Patent Abstract:
The present invention relates to a decoding method. The decoding method includes: analyzing (142) a data stream, and when partitioning an image block with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first 2NxN sub-image block and a second 2NxN sub-image block, or a first Nx2N sub-image block and a second Nx2N sub-image block, in a constraint sub-image processing mode, so that an image block partition pattern obtained for the partitioned first sub-image block and the partitioned second sub-image block is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern, where the first 2NxN sub-image block and the second 2NxN sub-image block, or the first Nx2N sub-image block and the second Nx2N sub-image block, are obtained by partitioning the image block with the size of 2Nx2N.
Publication number: BR112018077218A2
Application number: R112018077218
Filing date: 2017-06-26
Publication date: 2020-01-28
Inventors: Yang Haitao; Gao Shan; Ma Siwei; Wang Zhao
Applicant: Huawei Tech Co Ltd
IPC main class:
Patent description:

Descriptive Report of the Invention Patent for ENCODING METHOD AND APPARATUS AND DECODING METHOD AND APPARATUS
FIELD OF TECHNIQUE [001] The present invention relates to the field of video encoding, decoding, and compression, and specifically, to a technology for partitioning a plurality of coded image blocks in encoding and decoding processes.
BACKGROUND [002] Digital video capabilities can be incorporated into a wide range of devices, including a digital television, a digital live broadcast system, a wireless broadcast system, a personal digital assistant (PDA), a laptop or desktop computer, a tablet computer, an e-book reader, a digital camera, a digital recording device, a digital media player, a video game device, a video game console, a cellular or satellite radio phone, a video conferencing apparatus, a video streaming apparatus, and the like. Digital video devices implement video compression technologies, for example, the video compression technologies described in the standards defined in MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and ITU-T H.265/High Efficiency Video Coding (HEVC) and the extension parts of these standards, thereby transmitting and receiving digital video information more efficiently. A video device can implement video encoding and decoding technologies to transmit, receive, encode, decode, and/or store digital video information more efficiently.
[003] In the video encoding and decoding field, a frame is a complete image, and a plurality of image frames can be played back as a video after being arranged in a specified order and at a specified frame rate. Once the frame rate reaches a specified value, so that the interval between two frames is less than the resolution limit of the human eye, short-term visual persistence occurs, and the image frames appear to be displayed dynamically on a screen. A video file can be compressed based on compression encoding of single-frame digital images. A scanned image contains a large amount of repetitive representation information, which is referred to as redundant information. An image frame usually includes many parts with the same or a similar spatial structure. For example, there is usually a close correlation and similarity between the colors of sampling points on the same object or background. In a group of image frames, there is basically a strong correlation between an image frame and its previous or next image frame: there is only a slight difference between the pixel values describing the same information, and these pixel values can all be compressed. Similarly, a video file includes not only spatially redundant information but also a large amount of temporally redundant information, which is caused by the composition structure of a video. For example, the frame rate for video sampling usually ranges from 25 frames/second to 30 frames/second, and in a special case may be 60 frames/second. That is, the sampling time interval between two adjacent frames is only 1/30 second to 1/25 second. Within such a short time, the image frames obtained through sampling contain a large amount of similar information, and there is a strong correlation between the frames. However, frames are recorded separately in an original digital video recording system, and these coherent similarities are not considered or used. This causes a huge amount of repeated, redundant data. In addition, research has shown that, from the perspective of a psychological characteristic, specifically the visual sensitivity of the human eye, video information includes a part that can be compressed, that is, visual redundancy. Visual redundancy means that the bitstream of a video can be appropriately compressed by using the physiological characteristic that human eyes are relatively sensitive to a change in luminance but relatively insensitive to a change in chrominance. In a region of high luminosity, the sensitivity of human vision to luminance changes shows a downward trend. The human eye is relatively sensitive to the edge of an object and relatively insensitive to its internal region, and it is relatively sensitive to the overall structure and relatively insensitive to changes in internal detail. Because the final consumers of video image information are humans, these characteristics of the human eye can be fully used to compress the original video image information and achieve a better compression effect. In addition to the spatial redundancy, temporal redundancy, and visual redundancy above, video image information also includes a series of other redundancies, such as information entropy redundancy, structural redundancy, knowledge redundancy, and importance redundancy. One purpose of video compression encoding is to remove redundant information from a video stream using various technical methods, in order to save storage space and transmission bandwidth.
[004] With regard to the current state of technology development, video compression processing technologies mainly include intra prediction, inter prediction, transform and quantization, entropy coding, and deblocking filtering processing. Within the international scope, there are four predominant compression encoding schemes in existing video compression encoding standards: chrominance sampling, predictive encoding, transform encoding, and quantization encoding.
[005] Chrominance sampling: In this mode, the visual and psychological characteristics of human eyes are fully utilized, and the quantity of data used to describe a single element is minimized starting from the underlying data representation. Most television systems use luminance-chrominance (YUV) color coding, which is a standard widely used in European television systems. The YUV color space includes a luminance signal Y and two color difference signals U and V, and the three components are independent of each other. The YUV color mode is more flexible due to a representation mode in which the three components are separated from each other, occupies low bandwidth during transmission, and is advantageous over a conventional red, green, and blue (RGB) color model. For example, YUV 4:2:0 indicates that the two chrominance components U and V are each only one half of the luminance component Y in both the horizontal and the vertical direction; that is, for four sampling pixels, there are four Y luminance components, only one U chrominance component, and only one V chrominance component. In this mode, the quantity of data is further decreased to only approximately 33% of the quantity of original data. Compressing a video using the physiological and visual characteristics of the human eyes and such a chrominance sampling mode is one of the video data compression modes widely used today.
[006] Predictive encoding: To be specific, a frame to be currently encoded is predicted using data information from a previously encoded frame. A predicted value is obtained through prediction, and it is not exactly the same as the actual value; there is a residual value between the predicted value and the actual value. The more suitable the prediction, the closer the predicted value is to the actual value and the smaller the residual value. In this case, the quantity of data can be greatly reduced by encoding the residual value. During decoding on the decoder side, an initial image is restored and reconstructed using the residual value plus the predicted value. This is the basic idea of predictive coding. In the predominant coding standards, predictive coding is classified into two basic types: intra prediction and inter prediction.
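Purely as an illustration of the residual idea in [006] (the pixel values and the flat predictor below are hypothetical, not taken from any coding standard), the relationship between the predicted value, the residual value, and the reconstruction can be sketched as follows:

```python
import numpy as np

# A 4x4 block of actual pixel values (hypothetical example data).
actual = np.array([[52, 53, 54, 55],
                   [52, 53, 54, 55],
                   [53, 54, 55, 56],
                   [53, 54, 55, 56]])

# A prediction derived from previously coded samples; here simply a flat
# block, whereas real codecs use directional or motion-compensated predictors.
predicted = np.full((4, 4), 54)

# Only the residual needs to be coded; it has a much smaller dynamic range.
residual = actual - predicted

# The decoder reconstructs the block from the same prediction plus the residual.
reconstructed = predicted + residual
assert np.array_equal(reconstructed, actual)
print(residual)
```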
[007] Transform encoding: This means changing an information sampling value from the current domain into another, manually defined domain (usually referred to as a transform domain) according to a transform function of some form, and then performing compression encoding based on the distribution characteristic of the information in the transform domain, rather than directly encoding the original spatial-domain information. One motivation for transform encoding is that video image data usually has a large data correlation in the spatial domain, leading to a large amount of redundant information, and direct encoding would require a very large number of bits. In the transform domain, however, the data correlation is greatly reduced, the redundant information to be encoded is reduced, and the amount of data required for encoding is correspondingly greatly reduced as well. In this mode, a relatively high compression ratio can be obtained, and a relatively good compression effect can be achieved. Typical transform encodings include the Karhunen-Loeve (K-L) transform, the Fourier transform, and the like. The discrete cosine transform (DCT) is a transform coding scheme commonly used in many international standards.
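As an illustrative sketch of the energy-compaction property described in [007] (the sample values are hypothetical; scipy's dct routine is used here only for demonstration):

```python
import numpy as np
from scipy.fftpack import dct

# A smooth 1-D signal, typical of a flat image region (hypothetical data).
samples = np.array([50.0, 52.0, 54.0, 56.0, 58.0, 60.0, 62.0, 64.0])

# Forward DCT-II: most of the signal energy is concentrated in the first
# few coefficients, so the remaining ones can be coarsely quantized or
# dropped with little visible loss.
coefficients = dct(samples, type=2, norm='ortho')
print(np.round(coefficients, 2))
# Expected result: one large DC term, a few small AC terms, the rest near zero.
```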
[008] Quantization encoding: The transform process above does not itself compress data; the quantization process is a powerful means of compressing data and is also the main cause of data loss in lossy compression. The quantization process is a process of forcibly approximating an input value with a larger dynamic range to an output value with a smaller dynamic range. The quantized input value has a larger range and therefore needs to be expressed using more bits, whereas the forcibly approximated output value has a smaller range and can therefore be expressed using fewer bits. Each quantized input is normalized to a quantized output, that is, it is quantized to an order of magnitude. These orders of magnitude are generally referred to as quantization levels (which are usually specified by an encoder).
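A minimal sketch of the forcible approximation described in [008]; the step size of 8 and the coefficient values are hypothetical, and real encoders derive the level spacing from encoder-specified parameters:

```python
import numpy as np

def quantize(coefficients, step):
    """Map wide-range input values to a small set of quantization levels."""
    return np.round(coefficients / step).astype(int)

def dequantize(levels, step):
    """Recover an approximation of the original values (lossy)."""
    return levels * step

coeffs = np.array([161.2, -11.6, 0.9, -0.3, 4.4, -0.1, 0.0, 0.2])
levels = quantize(coeffs, step=8)    # few levels -> few bits to code
approx = dequantize(levels, step=8)  # reconstruction differs slightly
print(levels, approx)
```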
[009] The compression encoding schemes above are used in encoding algorithms based on a hybrid coding architecture, and an encoder control module selects, based on the local characteristics of different image blocks in a video frame, the encoding scheme used for those image blocks. A frequency-domain or spatial-domain prediction is performed on a block to be encoded through intra prediction, a motion-compensated prediction is performed on a block to be encoded through inter prediction, then transform and quantization processing is performed on the predicted residual to form a residual coefficient, and a final data stream is generated by an entropy encoder. To avoid accumulation of prediction errors, an intra-frame or inter prediction reference signal is obtained using a decoding module on the encoder side. Inverse quantization and inverse transform are performed on the residual coefficient obtained through transform and quantization, to reconstruct a residual signal, and then a reconstructed image is obtained by adding the residual signal to the prediction reference signal. During loop filtering, pixel correction is performed on the reconstructed image, thereby improving the encoding quality of the reconstructed image.
[0010] In a process of compressing an image using the video compression processing technology above, block partitioning must first be performed on the image to be encoded, that is, the original image. In H.264/AVC, the size of a coding block (Coding Block, CB) is fixed, but in H.265/HEVC, a coding tree block (Coding Tree Block, CTB) can be directly used as a CB, or can be further partitioned into a plurality of smaller CBs in a quadtree manner. Therefore, in H.265/HEVC, the CB size is variable; a maximum luminance CB is 64x64, and a minimum luminance CB is 8x8. A large CB can greatly improve the coding efficiency in a flat region, while a smaller CB can properly handle local image details, so that the prediction of a complex image is more accurate. As video becomes a predominant form of media, increasingly high requirements are placed on video compression performance. Therefore, a more flexible and efficient image partition pattern needs to be provided to meet this requirement.
SUMMARY [0011] The present invention provides an encoding method and apparatus and a decoding method and apparatus, to reduce redundancy and improve encoding and decoding efficiency when a quadtree plus binary tree partition pattern is used.
[0012] According to a first aspect of the present invention, an encoding method is provided, where the encoding method includes: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and [0013] the constraint sub-image processing mode includes: determining whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, encoding the first block of sub-images to generate a stream of encoded data; or when the first block of sub-images needs to be further partitioned, determining a partition pattern of the first block of sub-images, partitioning the first block of sub-images based on the partition pattern of the first block of sub-images, and encoding the partition pattern of the first block of sub-images and the partitioned first block of sub-images; and [0014] determining whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, encoding the second block of sub-images to generate a stream of encoded data; or when the second block of sub-images needs to be further partitioned, determining a partition pattern of the second block of sub-images, partitioning the second block of sub-images based on the partition pattern of the second block of sub-images, and encoding the partition pattern of the second block of sub-images and the partitioned second block of sub-images, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
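The control flow of paragraphs [0013] and [0014] can be sketched roughly as follows; the encoder object and its methods (needs_split, choose_pattern, encode_split_flag, encode_pattern, encode_block, partition) are hypothetical names introduced only for illustration and are not defined by the patent:

```python
# Illustrative partition patterns for a binary-tree split.
HOR, VER = "horizontal", "vertical"

def encode_pair_constrained(first_block, second_block, encoder):
    """Encode two sibling sub-image blocks in the constraint mode: the
    pattern chosen for the second block is restricted by the pattern of
    the first block, so that the combined result never reproduces the
    2Nx2N quadtree split into four NxN blocks."""
    first_pattern = None
    if encoder.needs_split(first_block):
        first_pattern = encoder.choose_pattern(first_block, allowed={HOR, VER})
        encoder.encode_split_flag(True)
        encoder.encode_pattern(first_pattern)
        encoder.encode_block(encoder.partition(first_block, first_pattern))
    else:
        encoder.encode_split_flag(False)
        encoder.encode_block(first_block)

    # Restrict the second block: for 2NxN siblings, two vertical splits
    # (or for Nx2N siblings, two horizontal splits) would duplicate the
    # quadtree result of four NxN blocks, so that combination is excluded.
    allowed = {HOR, VER}
    if first_block.shape == "2NxN" and first_pattern == VER:
        allowed = {HOR}
    elif first_block.shape == "Nx2N" and first_pattern == HOR:
        allowed = {VER}

    if encoder.needs_split(second_block):
        second_pattern = encoder.choose_pattern(second_block, allowed=allowed)
        encoder.encode_split_flag(True)
        encoder.encode_pattern(second_pattern)
        encoder.encode_block(encoder.partition(second_block, second_pattern))
    else:
        encoder.encode_split_flag(False)
        encoder.encode_block(second_block)
```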
[0015] It should be noted that, in the above method, the feature that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern, can also be described as follows: the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that a size of a block of sub-images obtained after at least one of the first block of sub-images or the second block of sub-images is partitioned is not NxN. That is, a constraint on a sub-image partition pattern causes a difference between a size of a sub-image block obtained through binary tree partition and a size of a sub-image block obtained through quadtree partition, thereby eliminating a redundancy.
[0016] In the present invention, constraint processing is performed on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the introduced constraint sub-image processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process.
[0017] According to an implementation of the encoding method provided in the first aspect of the present invention, in the constraint sub-image processing mode, the partition pattern of the first block of sub-images is from a first set of partition patterns, and the partition pattern of the second block of sub-images is from a second set of partition patterns, where the first set of partition patterns includes at least one partition pattern that is different from all partition patterns in the second set of partition patterns. For example, the first set of partition patterns can include vertical partition and horizontal partition, and the second set of partition patterns includes only horizontal partition or only vertical partition; that is, the second set of partition patterns is a subset of the first set of partition patterns. Specifically, a first set of partition patterns for the first block of sub-images with the size of 2NxN includes a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns includes the horizontal partition pattern; and a first set of partition patterns for the first block of sub-images with the size of Nx2N includes a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns includes the vertical partition pattern. This limitation can be used to avoid using, in the process of processing the first block of sub-images and the second block of sub-images, a partition pattern that partitions the 2Nx2N image block into four blocks of sub-images with a size of NxN, thereby reducing redundancy. In addition, in a process of performing decoding processing on the second block of sub-images, the code words to be read can be reduced because the number of partition methods used for the second block of sub-images is limited.
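The example sets in [0017] can be written compactly as follows (an illustrative sketch; the set names are not identifiers from the patent):

```python
# Example partition-pattern sets from paragraph [0017] (illustrative only).
FIRST_SET = {"horizontal", "vertical"}

# For 2NxN siblings the second block may only be split horizontally,
# for Nx2N siblings only vertically, so the second set is a strict
# subset of the first set in each case.
SECOND_SET = {
    "2NxN": {"horizontal"},
    "Nx2N": {"vertical"},
}

assert SECOND_SET["2NxN"] <= FIRST_SET and SECOND_SET["Nx2N"] <= FIRST_SET
```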
[0018] According to another implementation of the encoding method provided in the first aspect of the present invention, if a vertical partition is performed on the first block of 2NxN sub-images and only horizontal partition is allowed for partitioning the second block of 2NxN sub-images, the encoding method may encode only whether the second block of 2NxN sub-images is further partitioned, without needing to encode a specific partition pattern of the second block of 2NxN sub-images; and if the second block of 2NxN sub-images needs to be further partitioned, the partition pattern of the second block of 2NxN sub-images is horizontal partition by default. In this mode, the code words required for encoding can be further reduced.
[0019] According to another implementation of the encoding method provided in the first aspect of the present invention, if a horizontal partition is performed on the first block of Nx2N sub-images and only vertical partition is allowed for partitioning the second block of Nx2N sub-images, the encoding method may encode only whether the second block of Nx2N sub-images is further partitioned, without needing to encode a specific partition pattern of the second block of Nx2N sub-images; and if the second block of Nx2N sub-images needs to be further partitioned, the partition pattern of the second block of Nx2N sub-images is vertical partition by default. In this mode, the code words required for encoding can be further reduced.
[0020] According to another implementation of the encoding method provided in the first aspect of the present invention, when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is vertical partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is horizontal partition or vertical partition. In this mode, the flexibility of a binary tree partition pattern can be fully utilized to improve the coding efficiency.
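The four cases of [0020] amount to a small lookup from the first block's partition pattern to the patterns allowed for the second block; the function below is a hypothetical sketch of that mapping, not an implementation mandated by the patent:

```python
def allowed_second_patterns(block_size, first_pattern):
    """Allowed partition patterns of the second sub-image block, given the
    sibling size and the first block's partition pattern (cases enumerated
    in paragraph [0020], illustrative only)."""
    if block_size == "2NxN":
        # vertical + vertical would reproduce the four NxN quadtree blocks
        return {"horizontal"} if first_pattern == "vertical" else {"horizontal", "vertical"}
    if block_size == "Nx2N":
        # horizontal + horizontal would reproduce the four NxN quadtree blocks
        return {"vertical"} if first_pattern == "horizontal" else {"horizontal", "vertical"}
    raise ValueError("unexpected sibling size")
```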
[0021] According to another implementation of the encoding method provided in the first aspect of the present invention, when a quadtree partition is allowed, the constraint sub-image processing mode is available only for a block of sub-images obtained using a specific partition pattern; that is, it is used only to process the block of sub-images obtained using the specific partition pattern. For example, the constraint sub-image processing mode is available for, that is, applicable to, a block of sub-images with the size of Nx2N, but is unavailable for, that is, inapplicable to, a block of sub-images with the size of 2NxN. In this mode, the flexibility of the processing process can be improved.
[0022] According to another implementation of the encoding method provided in the first aspect of the present invention, the encoding method may further include: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is not allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a non-constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N. The non-constraint sub-image processing mode includes: determining whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, encoding the first block of sub-images to generate a stream of encoded data; or when the first block of sub-images needs to be further partitioned, determining a partition pattern of the first block of sub-images, partitioning the first block of sub-images based on the partition pattern of the first block of sub-images, and encoding the partition pattern of the first block of sub-images and the partitioned first block of sub-images, where the partition pattern of the first block of sub-images is from a first set of partition patterns. The non-constraint sub-image processing mode also includes: determining whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, encoding the second block of sub-images to generate a stream of encoded data; or when the second block of sub-images needs to be further partitioned, determining a partition pattern of the second block of sub-images, partitioning the second block of sub-images based on the partition pattern of the second block of sub-images, and encoding the partition pattern of the second block of sub-images and the partitioned second block of sub-images, where the partition pattern of the second block of sub-images is from a second set of partition patterns, and all of the partition patterns in the first set of partition patterns are the same as all of the partition patterns in the second set of partition patterns.
[0023] In this processing mode, the following can be ensured: when the quadtree partition pattern cannot be used (for example, according to an existing rule, when a quadtree leaf node is partitioned using a binary tree, the leaf nodes obtained through the binary tree partition cannot be partitioned using a quadtree), using the non-constraint sub-image processing mode to obtain blocks of sub-images with an NxN size is allowed. This can ensure that the gain brought by a quadtree partition pattern can be fully used for an image.
[0024] According to another implementation of the encoding method provided in the first aspect of the present invention, the constraint sub-image processing mode is used to encode an I slice (slice). This can ensure a maximum gain.
[0025] In accordance with a second aspect of the present invention, a method of decoding is provided, which includes:
[0026] analyzing a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N; and [0027] the constraint sub-image processing mode includes: [0028] determining whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, decoding a stream of encoded data of the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyzing the data stream to obtain a partition pattern of the first block of sub-images, and decoding the first block of sub-images based on the obtained partition pattern of the first block of sub-images; and [0029] determining whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, decoding a stream of encoded data of the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyzing the data stream to obtain a partition pattern of the second block of sub-images, and decoding the second block of sub-images based on the obtained partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
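A rough decoder-side sketch of paragraphs [0028] and [0029] follows; the parser and decoder objects and their methods are hypothetical names used only to make the flow concrete:

```python
HOR, VER = "horizontal", "vertical"

def decode_pair_constrained(first_block, second_block, parser, decoder):
    """Decode two sibling sub-image blocks in the constraint mode. The set
    of patterns that can be parsed for the second block is restricted by
    the pattern of the first block, so fewer code words need to be read."""
    first_pattern = None
    if parser.read_split_flag():
        first_pattern = parser.read_pattern(allowed={HOR, VER})
        decoder.decode_partitioned(first_block, first_pattern)
    else:
        decoder.decode_block(first_block)

    # Same restriction as on the encoder side (sketch for 2NxN / Nx2N siblings):
    allowed = {HOR, VER}
    if first_block.shape == "2NxN" and first_pattern == VER:
        allowed = {HOR}   # the pattern is implied; no extra code word is read
    elif first_block.shape == "Nx2N" and first_pattern == HOR:
        allowed = {VER}

    if parser.read_split_flag():
        second_pattern = (next(iter(allowed)) if len(allowed) == 1
                          else parser.read_pattern(allowed))
        decoder.decode_partitioned(second_block, second_pattern)
    else:
        decoder.decode_block(second_block)
```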
[0030] It should be noted that, in the above method, the feature that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern, can also be described as follows: the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that a size of a block of sub-images obtained after at least one of the first block of sub-images or the second block of sub-images is partitioned is not NxN. That is, a constraint on a sub-image partition pattern causes a difference between a size of a sub-image block obtained through binary tree partition and a size of a sub-image block obtained through quadtree partition, thereby eliminating a redundancy.
[0031] In the present invention, constraint processing is performed on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the constraint sub-image processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process. [0032] According to an implementation of the decoding method provided in the second aspect of the present invention, in the constraint sub-image processing mode, the partition pattern of the first block of sub-images is from a first set of partition patterns, and the partition pattern of the second block of sub-images is from a second set of partition patterns, where the first set of partition patterns includes at least one partition pattern that is different from all partition patterns in the second set of partition patterns. For example, the first set of partition patterns can include vertical partition and horizontal partition, and the second set of partition patterns includes only horizontal partition or only vertical partition; that is, the second set of partition patterns is a subset of the first set of partition patterns. Specifically, a first set of partition patterns for the first block of sub-images with the size of 2NxN includes a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns includes the horizontal partition pattern; and a first set of partition patterns for the first block of sub-images with the size of Nx2N includes a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns includes the vertical partition pattern. This limitation can be used to avoid using, in the process of processing the first block of sub-images and the second block of sub-images, a partition pattern that partitions the 2Nx2N image block into four blocks of sub-images with a size of
NxN, thereby reducing redundancy. In addition, in a process of performing decoding processing on the second block of sub-images, the code words to be read can be reduced because the number of partition methods used for the second block of sub-images is limited.
[0033] According to another implementation of the decoding method provided in the second aspect of the present invention, if a vertical partition is performed on the first block of 2NxN sub-images and only horizontal partition is allowed for the second block of 2NxN sub-images, the decoding method may perform decoding only to determine whether the second block of 2NxN sub-images is further partitioned, without needing to perform decoding to determine a specific partition pattern of the second block of 2NxN sub-images; and if the second block of 2NxN sub-images needs to be further partitioned, the partition pattern of the second block of 2NxN sub-images is horizontal partition by default. In this mode, the code words that need to be read in a decoding process can be further reduced, thereby improving decoding efficiency.
[0034] According to another implementation of the decoding method provided in the second aspect of the present invention, if a horizontal partition is performed on the first block of Nx2N sub-images and only vertical partition is allowed for the second block of Nx2N sub-images, the decoding method may perform decoding only to determine whether the second block of Nx2N sub-images is further partitioned, without needing to perform decoding to determine a specific partition pattern of the second block of Nx2N sub-images; and if the second block of Nx2N sub-images needs to be further partitioned, the partition pattern of the second block of Nx2N sub-images is vertical partition by default. In this mode, the code words that need to be read in a decoding process can be further reduced, thereby improving decoding efficiency.
[0035] According to another implementation of the decoding method provided in the second aspect of the present invention, when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is vertical partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is horizontal partition or vertical partition. In this mode, the flexibility of a binary tree partition pattern can be fully utilized to improve the coding efficiency.
[0036] According to another implementation of the decoding method provided in the second aspect of the present invention, when a quadtree partition is allowed, the constraint sub-image processing mode is available only for a block of sub-images obtained using a specific partition pattern; that is, it is used only to process the block of sub-images obtained using the specific partition pattern. For example, the constraint sub-image processing mode is available for, that is, applicable to, a block of sub-images with the size of Nx2N, but is unavailable for, that is, inapplicable to, a block of sub-images with the size of 2NxN. In this mode, the flexibility of the processing process can be improved. [0037] According to another implementation of the decoding method provided in the second aspect of the present invention, when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is not allowed, a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images are processed in a non-constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N. The non-constraint sub-image processing mode includes: determining whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, decoding a stream of encoded data of the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyzing the data stream to obtain a partition pattern of the first block of sub-images, and decoding the first block of sub-images based on the obtained partition pattern of the first block of sub-images, where the partition pattern of the first block of sub-images is from a first set of partition patterns. The non-constraint sub-image processing mode further includes: determining whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, decoding a stream of encoded data of the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyzing the data stream to obtain a partition pattern of the second block of sub-images, and decoding the second block of sub-images based on the obtained partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is from a second set of partition patterns, and all of the partition patterns in the first set of partition patterns are the same as all of the partition patterns in the second set of partition patterns.
[0038] In this processing mode, the following can be ensured: when the quadtree partition pattern cannot be used (for example, according to an existing rule, when a quadtree leaf node is partitioned using a binary tree, the leaf nodes obtained through the binary tree partition cannot be partitioned using a quadtree), using the non-constraint sub-image processing mode to obtain blocks of sub-images with an NxN size is allowed. This can ensure that the gain brought by a quadtree partition pattern can be fully used for an image.
[0039] According to another implementation of the decoding method provided in the second aspect of the present invention, the above constraint sub-image processing mode is used to decode an I slice (slice).
[0040] In accordance with a third aspect of the present invention, an encoding method is provided, which includes: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and [0041] the constraint sub-image processing mode includes: [0042] determining a partition pattern of the first block of sub-images, encoding the partition pattern of the first block of sub-images, and encoding the first block of sub-images based on the partition pattern of the first block of sub-images; and [0043] determining a partition pattern of the second block of sub-images, encoding the partition pattern of the second block of sub-images, and encoding the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
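In the third aspect there is no separate "further partitioned" flag: the signalled partition pattern itself may be "no partition". The following hypothetical sketch illustrates that difference (all names are assumptions introduced for illustration, not patent-defined identifiers):

```python
# Illustrative patterns for the third aspect: "no partition" is itself one
# of the signalled patterns, so no separate split flag is coded.
NONE, HOR, VER = "none", "horizontal", "vertical"

def encode_pair_third_aspect(first_block, second_block, encoder):
    """Sketch of paragraphs [0042]-[0043]: each sibling's partition pattern
    is chosen and coded directly, and the second block's choice is
    constrained by the first block's pattern."""
    first_pattern = encoder.choose_pattern(first_block, allowed={NONE, HOR, VER})
    encoder.encode_pattern(first_pattern)
    encoder.encode_block(first_block, first_pattern)

    allowed = {NONE, HOR, VER}
    if first_block.shape == "2NxN" and first_pattern == VER:
        allowed = {NONE, HOR}   # avoid reproducing the four NxN quadtree blocks
    elif first_block.shape == "Nx2N" and first_pattern == HOR:
        allowed = {NONE, VER}

    second_pattern = encoder.choose_pattern(second_block, allowed=allowed)
    encoder.encode_pattern(second_pattern)
    encoder.encode_block(second_block, second_pattern)
```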
[0044] It should be noted that, in the above method, the feature that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern, can also be described as follows: the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that a size of a block of sub-images obtained after at least one of the first block of sub-images or the second block of sub-images is partitioned is not NxN. That is, a constraint on a sub-image partition pattern causes a difference between a size of a sub-image block obtained through binary tree partition and a size of a sub-image block obtained through quadtree partition, thereby eliminating a redundancy.
[0045] In the encoding method, the block of sub-images with the size of Nx2N or the block of sub-images with the size of 2NxN is encoded in the constraint sub-image processing mode, thereby reducing the redundancy that exists when an image is partitioned using a quadtree plus a binary tree. [0046] The encoding method provided in the third aspect of the present invention has all the beneficial effects of the encoding method provided in the first aspect of the present invention, and may require a smaller data stream. In addition, unless otherwise specified, the encoding method provided in the third aspect is applicable to all extended implementations of the encoding method provided in the first aspect of the present invention.
[0047] In addition, according to an implementation of the encoding method in the third aspect of the present invention, for the first block of 2NxN sub-images and the second block of 2NxN sub-images, the first set of partition patterns includes no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns includes no partition and the horizontal partition pattern; and for the first block of Nx2N sub-images and the second block of Nx2N sub-images, the first set of partition patterns includes no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns includes no partition and the vertical partition pattern.
[0048] According to another implementation of the encoding method in the third aspect of the present invention, that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images includes: when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is no partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is no partition, vertical partition, or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is no partition or vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is no partition, horizontal partition, or vertical partition.
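The constraint cases of [0048] can be tabulated as follows (a sketch only; the dictionary keys and the "non-vertical"/"non-horizontal" labels are illustrative):

```python
# Allowed patterns of the second sub-image block in the third aspect,
# keyed by (sibling size, pattern of the first block); "none" means the
# block is not partitioned further (sketch of paragraph [0048]).
ALLOWED_SECOND = {
    ("2NxN", "vertical"):       {"none", "horizontal"},
    ("2NxN", "non-vertical"):   {"none", "horizontal", "vertical"},
    ("Nx2N", "horizontal"):     {"none", "vertical"},
    ("Nx2N", "non-horizontal"): {"none", "horizontal", "vertical"},
}
```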
[0049] In accordance with a fourth aspect of the present invention, a decoding method is provided, which includes: analyzing a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N; and [0050] the constraint sub-image processing mode includes: [0051] analyzing the data stream to determine a partition identifier of the first block of sub-images, determining a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decoding the first block of sub-images based on the partition pattern of the first block of sub-images; and [0052] analyzing the data stream to determine a partition identifier of the second block of sub-images, determining a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decoding the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[0053] It should be noted that, in the above method, the feature that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern, can also be described as follows: the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that a size of a block of sub-images obtained after at least one of the first block of sub-images or the second block of sub-images is partitioned is not NxN. That is, a constraint on a sub-image partition pattern causes a difference between a size of a sub-image block obtained through binary tree partition and a size of a sub-image block obtained through quadtree partition, thereby eliminating a redundancy.
[0054] The decoding method provided in the fourth aspect of the present invention has all the beneficial effects of the decoding method provided in the second aspect of the present invention, and may require a smaller data stream. In addition, the decoding method provided in the fourth aspect of the present invention is applicable to all extended implementations of the decoding method provided in the second aspect of the present invention.
[0055] According to an implementation of the decoding method in the fourth aspect of the present invention, for the first block of 2NxN sub-images and the second block of 2NxN sub-images, the first set of partition patterns includes no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns includes no partition and the horizontal partition pattern; and for the first block of Nx2N sub-images and the second block of Nx2N sub-images, the first set of partition patterns includes no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns includes no partition and the vertical partition pattern.
[0056] According to another implementation of the decoding method in the fourth aspect of the present invention, that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images includes: when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is no partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is no partition, vertical partition, or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is no partition or vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is no partition, horizontal partition, or vertical partition.
[0057] According to a fifth aspect of the present invention, an encoding apparatus is provided, where the encoding apparatus corresponds to the encoding method provided in the first aspect of the present invention, is configured to implement all implementations included in the encoding method provided in the first aspect of the present invention, and includes:
[0058] a constraint encoding determination module, configured to: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N; and [0059] a constraint encoding module that is configured to implement the constraint sub-image processing mode and that includes:
[0060] a first sub-image processing module, configured to: determine whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, encode the first block of sub-images to generate a stream of encoded data; or when the first block of sub-images needs to be further partitioned, determine a partition pattern of the first block of sub-images, partition the first block of sub-images based on the partition pattern of the first block of sub-images, and encode the partition pattern of the first block of sub-images and the partitioned first block of sub-images; and [0061] a second sub-image processing module, configured to: determine whether the second block of sub-images
needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, encode the second block of sub-images to generate a stream of encoded data; or when the second block of sub-images needs to be further partitioned, determine a partition pattern of the second block of sub-images, partition the second block of sub-images based on the partition pattern of the second block of sub-images, and encode the partition pattern of the second block of sub-images and the partitioned second block of sub-images, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[0062] In the present invention, the encoding apparatus performs constraint processing on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the constraint sub-image processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process.
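The module composition of the fifth-aspect apparatus could be organized along the following lines; this is a structural sketch only, and every class and method name is an assumption introduced for illustration:

```python
class ConstraintEncodingModule:
    """Implements the constraint sub-image processing mode by delegating
    to the first and second sub-image processing modules."""
    def __init__(self, first_module, second_module):
        self.first_module = first_module
        self.second_module = second_module

    def process(self, first_block, second_block):
        first_pattern = self.first_module.encode(first_block)
        # The second module receives the first block's pattern so that it
        # can restrict the second block's partition pattern accordingly.
        self.second_module.encode(second_block, constrained_by=first_pattern)


class EncodingApparatus:
    """Sketch of the fifth-aspect apparatus: a constraint encoding
    determination module decides whether quadtree partitioning of the
    2Nx2N block is allowed and, if so, routes the two sibling sub-image
    blocks to the constraint encoding module."""
    def __init__(self, determination_module, constraint_module):
        self.determination_module = determination_module
        self.constraint_module = constraint_module

    def encode_siblings(self, parent_block, first_block, second_block):
        if self.determination_module.quadtree_allowed(parent_block):
            self.constraint_module.process(first_block, second_block)
```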
[0063] According to a sixth aspect of the present invention, a decoding apparatus is provided, where the decoding apparatus corresponds to the decoding method provided in the second aspect of the present invention, is configured to implement all implementations included in the decoding method provided in the second aspect of the present invention, and includes:
[0064] a constraint decoding determination module, configured to: analyze a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and [0065] a constraint decoding module that is configured to implement the constraint sub-image processing mode and that includes:
[0066] a first subimage processing module, configured to: determine whether the first subimage block needs to be additionally partitioned; and when the first block of sub-images does not need to be further partitioned, decode a stream of encoded data from the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyze the data flow to obtain a partition pattern of the first block of sub-images, and decode the first block of sub-images based on the partition pattern obtained from the first block of sub-images; and [0067] a second subimage processing module, configured to: determine whether the second subimage block needs to be additionally partitioned; and when the second block of sub-images does not need to be further partitioned, decode a stream of encoded data from the second block of sub-images; or when the second block of sub-images needs to be additionally
partitioned, analyze the data flow to obtain a partition pattern of the second block of sub-images, and decode the second block of sub-images based on the partition pattern obtained from the second block of sub-images, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the partitioned second block of sub-images and the partitioned first block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern. [0068] The decoding apparatus provided in this implementation of the present invention performs constraint processing on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the constraint sub-image processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process.
[0069] According to an implementation of the decoding apparatus provided in the sixth aspect of the present invention, the restriction decoding determination module is further configured to: when partitioning the 2Nx2N image block using the quadtree partition pattern is not allowed, process encoded data streams of the first sub-image block and the second sub-image block in a non-restriction sub-image processing mode; and correspondingly, the decoding apparatus further includes: a non-restriction decoding module that is configured to implement the non-restriction sub-image processing mode and that includes:
[0070] a third sub-image processing module, configured to: analyze the data stream to determine a partition identifier of the first sub-image block, determine a partition pattern of the first sub-image block based on the partition identifier of the first sub-image block, and decode the first sub-image block based on the partition pattern of the first sub-image block; and [0071] a fourth sub-image processing module, configured to: analyze the data stream to determine a partition identifier of the second sub-image block, determine a partition pattern of the second sub-image block based on the partition identifier of the second sub-image block, and decode the second sub-image block based on the partition pattern of the second sub-image block, where the partition pattern of the first sub-image block and the partition pattern of the second sub-image block are selected from the same set of partition patterns.
[0072] According to a seventh aspect of the present invention, an encoding apparatus is provided, where the encoding apparatus corresponds to the encoding method provided in the third aspect of the present invention, is configured to implement all implementations included in the encoding method provided in the third aspect of the present invention, and includes:
[0073] a restriction encoding determination module, configured to: when partitioning an image block with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first 2NxN sub-image block and a second 2NxN sub-image block or a first Nx2N sub-image block and a second Nx2N sub-image block in a constraint sub-image processing mode, where the first 2NxN sub-image block and the second 2NxN sub-image block or the first Nx2N sub-image block and the second Nx2N sub-image block are obtained by partitioning the image block with the size of 2Nx2N; and [0074] a restriction encoding module that is configured to implement the constraint sub-image processing mode and that includes:
[0075] a first sub-image processing module, configured to: determine a partition pattern of the first sub-image block, encode the partition pattern of the first sub-image block, and encode the first sub-image block based on the partition pattern of the first sub-image block; and [0076] a second sub-image processing module, configured to: determine a partition pattern of the second sub-image block, encode the partition pattern of the second sub-image block, and encode the second sub-image block based on the partition pattern of the second sub-image block, where the partition pattern of the second sub-image block is restricted by the partition pattern of the first sub-image block, so that an image block partition pattern obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[0077] In the present invention, the encoding apparatus performs constraint processing on the first 2NxN sub-image block and the second 2NxN sub-image block and/or the first Nx2N sub-image block and the second Nx2N sub-image block in the constraint sub-image processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process.
[0078] According to an eighth aspect of the present invention, a decoding apparatus is provided, where the decoding apparatus corresponds to the decoding method provided in the fourth aspect of the present invention, is configured to implement all implementations included in the decoding method provided in the fourth aspect of the present invention, and includes: [0079] a restriction decoding determination module, configured to: analyze a data stream, and when partitioning an image block with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first 2NxN sub-image block and a second 2NxN sub-image block or a first Nx2N sub-image block and a second Nx2N sub-image block in a constraint sub-image processing mode, where the first 2NxN sub-image block and the second 2NxN sub-image block or the first Nx2N sub-image block and the second Nx2N sub-image block are obtained by partitioning the image block with the size of 2Nx2N; and [0080] a constraint decoding module that is configured to implement the constraint sub-image processing mode and that includes:
[0081] a first sub-image processing module, configured to: analyze the data stream to determine a partition identifier of the first sub-image block, determine a partition pattern of the first sub-image block based on the partition identifier of the first sub-image block, and decode the first sub-image block based on the partition pattern of the first sub-image block; and [0082] a second sub-image processing module, configured to: analyze the data stream to determine a partition identifier of the second sub-image block, determine a partition pattern of the second sub-image block based on the partition identifier of the second sub-image block, and decode the second sub-image block based on the partition pattern of the second sub-image block, where the partition pattern of the second sub-image block is restricted by the partition pattern of the first sub-image block, so that an image block partition pattern obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[0083] The decoding apparatus provided in this implementation of the present invention performs constraint processing on the first 2NxN sub-image block and the second 2NxN sub-image block and/or the first Nx2N sub-image block and the second Nx2N sub-image block in the constraint sub-image processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process.
[0084] According to an implementation of the decoding apparatus provided in the eighth aspect of the present invention, the restriction decoding determination module is further configured to: when partitioning the 2Nx2N image block using the quadtree partition pattern is not allowed, process encoded data streams of the first sub-image block and the second sub-image block in a non-restriction sub-image processing mode; and correspondingly, the decoding apparatus further includes: a non-restriction decoding module that is configured to implement the non-restriction sub-image processing mode and that includes:
[0085] a third sub-image processing module, configured to: analyze the data stream to determine a partition identifier of the first sub-image block, determine a partition pattern of the first sub-image block based on the partition identifier of the first sub-image block, and decode the first sub-image block based on the partition pattern of the first sub-image block; and [0086] a fourth sub-image processing module, configured to: analyze the data stream to determine a partition identifier of the second sub-image block, determine a partition pattern of the second sub-image block based on the partition identifier of the second sub-image block, and decode the second sub-image block based on the partition pattern of the second sub-image block, where the partition pattern of the first sub-image block and the partition pattern of the second sub-image block are selected from the same set of partition patterns.
[0087] According to the encoding method and apparatus and the decoding method and apparatus that are provided in the implementations of the present invention, in a scenario in which an image is partitioned using a quadtree plus binary tree, the redundancy that exists in the quadtree plus binary tree partition is eliminated by introducing a constraint sub-image processing mode, thereby reducing encoding and decoding complexity and improving encoding and decoding efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS [0088] To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly describes the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description merely show some embodiments of the present invention, and a person skilled in the art can still derive other drawings from these accompanying drawings without creative efforts.
[0089] Figure 1 is a schematic block diagram of a video encoding system according to an embodiment of the present invention;
[0090] Figure 2 is a schematic diagram of a video encoding apparatus according to an embodiment of the present invention;
[0091] Figure 3 is a schematic block diagram of another video encoding and decoding system according to an embodiment of the present invention;
[0092] Figure 4 is a schematic block diagram of a video encoder according to an embodiment of the present invention;
[0093] Figure 5 is a schematic block diagram of a video decoder according to an embodiment of the present invention;
[0094] Figure 6 is a schematic apparatus diagram of a video encoder according to an embodiment of the present invention;
[0095] Figure 7 is a schematic diagram of a quadtree plus binary tree partition structure;
[0096] Figure 8 is a schematic apparatus diagram of a video decoder according to an embodiment of the present invention;
[0097] Figure 9 is a schematic diagram of partitioning an image block with a size of 2Nx2N into sub-image blocks with a size of NxN using a quadtree partition pattern;
[0098] Figure 10 is a schematic diagram of partitioning an image block with a size of 2Nx2N into image blocks with a size of NxN using a binary tree partition pattern;
[0099] Figure 11 is a schematic diagram of sub-image block processing sequences in different partition patterns used when an image block with a size of 2Nx2N is partitioned into sub-image blocks with a size of NxN using a quadtree plus binary tree partition pattern; [00100] Figure 12 is a schematic diagram of reference blocks available in different sub-image processing sequences in different partition patterns used when an image block with a size of 2Nx2N is partitioned into sub-image blocks with a size of NxN using a quadtree plus binary tree partition pattern;
[00101] Figure 13 is a schematic flow chart of a method of implementing a coding method according to an embodiment of the present invention;
[00102] Figure 14 is a schematic flow chart of a method of implementing a decoding method according to an embodiment of the present invention;
[00103] Figure 15 is a schematic flowchart of a method of implementing a coding method according to another embodiment of the present invention;
[00104] Figure 16 is a schematic flow chart of a method of implementing a decoding method according to another embodiment of the present invention;
[00105] Figure 17 is a schematic block diagram of a coding apparatus according to an embodiment of the present invention;
[00106] Figure 18 is a schematic block diagram of a
decoding apparatus according to an embodiment of the present invention;
[00107] Figure 19 is a schematic block diagram of a coding apparatus according to another embodiment of the present invention;
[00108] Figure 20 is a schematic block diagram of a decoding apparatus according to another embodiment of the present invention;
[00109] Figure 21 is a schematic structural diagram of an applicable television application according to an embodiment of the present invention; and [00110] Figure 22 is a schematic structural diagram of an applicable mobile phone application according to an embodiment of the present invention.
DESCRIPTION OF EMBODIMENTS [00111] The following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
[00112] Figure 1 is a schematic block diagram of a video codec apparatus 50 or an electronic device 50. The apparatus or electronic device may be integrated with a codec in the embodiments of the present invention. Figure 2 is a schematic diagram of a video encoding apparatus according to an embodiment of the present invention. The following describes the units in Figure 1 and Figure 2.
[00113] The electronic device 50 can be, for example, a mobile terminal or user equipment in a wireless communications system. It should be understood that the embodiments of the present invention can be implemented by any electronic device or apparatus that may need to encode and decode, or encode, or decode a video image.
[00114] The apparatus 50 may include a housing 30 that is configured to accommodate and protect the device. The apparatus 50 may further include a display 32 in the form of a liquid crystal display. In another embodiment of the present invention, the display may use any display technology suitable for displaying an image or a video. The apparatus 50 may further include a keyboard 34. In another embodiment of the present invention, any suitable data input mechanism or user interface may be used. For example, the user interface may be implemented as a virtual keyboard, or a data entry system may be used as a component of a touch-sensitive display. The apparatus may include a microphone 36 or any appropriate audio input. The audio input may be a digital or analog signal input. The apparatus 50 may further include an audio output device. In this embodiment of the present invention, the audio output device may be any of the following: a headset 38, a speaker, or an analog or digital audio output connector. The apparatus 50 may further include a battery 40. In another embodiment of the present invention, the device may be powered by any suitable mobile power source, for example, a solar cell, a fuel cell, or a clockwork generator. The apparatus may further include an infrared port 42 used for short-range line-of-sight communication with another device. In another embodiment, the apparatus 50 may further include any appropriate short-range communication solution, for example, a Bluetooth wireless connection or a USB/FireWire wired connection.
[00115] The apparatus 50 may include a controller 56 or a processor configured to control the apparatus 50. The controller 56 may be connected to a memory 58. In this embodiment of the present invention, the memory may store data in the form of images and data in the form of audio, and/or may store an instruction to be executed by the controller 56. The controller 56 may further be connected to a codec circuit 54 that is suitable for encoding and decoding audio and/or video data or for assisting in the encoding and decoding carried out by the controller 56.
[00116] The device 50 may further include a card reader 48 and a smart card 46 that are configured to provide user information and are suitable for providing authentication information used to perform user authentication and authorization on a network, for example, a UICC and a UICC reader.
[00117] The apparatus 50 may further include a radio interface circuit 52. The radio interface circuit is connected to the controller and is suitable for generating, for example, a wireless communication signal used for communication with a cellular communications network, a wireless communications system, or a wireless local area network. The apparatus may further include an antenna 44. The antenna is connected to the radio interface circuit 52, to send a radio frequency signal generated by the radio interface circuit 52 to another apparatus (or a plurality of apparatuses) and to receive a radio frequency signal from the other apparatus (or the plurality of apparatuses).
[00118] In some embodiments of the present invention, the apparatus
includes a camera capable of recording or detecting a single frame, and codec 54 or the controller receives and processes the single frame. In some embodiments of the present invention, the apparatus receives video image data to be processed from another device before carrying out transmission and/or storage. In some embodiments of the present invention, the apparatus 50 may receive an image via a wireless or wired connection, to perform encoding/decoding.
[00119] Figure 3 is a schematic block diagram of another video encoding and decoding system 10 according to an embodiment of the present invention. As shown in Figure 3, the video encoding and decoding system 10 includes a source device 12 and a destination device 14. The source device 12 generates encoded video data. Therefore, the source device 12 can be referred to as a video encoding device or a video encoding device. The destination device 14 can decode the encoded video data generated by the source device 12. Therefore, the destination device 14 can be referred to as a video decoding device or a video decoding device. The source device 12 and the destination device 14 can be examples of video encoding and decoding devices or examples of video encoding and decoding devices. The source device 12 and the destination device 14 may include a device in a broad sense that includes a desktop computer, a mobile computing device, a notebook computer (e.g. laptop), a tablet computer, a decoder, a portable phone such as a smartphone, television, camera, display, digital media player, video game console, in-vehicle computer, or similar device.
[00120] The destination device 14 can receive the encoded video data from the source device 12 using a channel 16. Channel 16 can include one or more media and/or devices capable of moving the encoded video data from the source device 12 to the destination device
14. In one example, channel 16 may include one or more communications media that enable the source device 12 to directly transmit encoded video data to the destination device 14 in real time. In this example, the source device 12 can modulate the encoded video data according to a communications standard (for example, a wireless communications protocol), and can transmit the modulated video data to the destination device 14. The one or more communications media may include wireless and/or wired communications media, for example, a radio frequency (RF) spectrum or one or more physical transmission lines. The one or more communications media can form a component of a packet-based network (for example, a local area network, a wide area network, or a global network (for example, the Internet)). The one or more communications media may include a router, a switch, a base station, or other devices that facilitate communication from the source device 12 to the destination device 14. [00121] In another example, channel 16 may include a storage medium that stores the encoded video data generated by the source device 12. In this example, the destination device 14 can access the storage medium through disk access or card access. The storage medium may include a plurality of locally accessible data storage media, for example, a Blu-ray disc, a DVD, a CD-ROM, a flash memory, or another appropriate digital storage medium configured to store encoded video data.
[00122] In another example, channel 16 may include a file server or another intermediate storage device that stores the encoded video data generated by the source device 12. In this example, the destination device 14 can access, through streaming or downloading, the encoded video data stored on the file server or the other intermediate storage device. The file server can be a type of server that can store encoded video data and can transmit the encoded video data to the destination device 14. Exemplary file servers include a web server (for example, used for a website), a File Transfer Protocol (FTP) server, a network attached storage (NAS) device, and a local disk drive.
[00123] The destination device 14 can access the encoded video data through a standard data connection (for example, an Internet connection). Exemplary types of data connections include a wireless channel (for example, a Wi-Fi connection), a wired connection (for example, a DSL or cable modem) suitable for accessing the encoded video data stored on the file server, or a combination thereof. The transmission of the encoded video data from the file server can be streaming transmission, download transmission, or a combination thereof.
[00124] A technology of the present invention is not limited to a wireless application scenario. For example, the technology can be applied to video encoding and decoding that supports a plurality of types of multimedia applications such as the following applications: air television broadcast, cable television broadcast, satellite television broadcast, broadcast streaming video (eg via the Internet), encoding video data stored on the data storage medium, decoding video data stored on the data storage medium, or other application. In some instances, the
video encoding and decoding system 10 can be configured to support unidirectional or bidirectional video transmission and support applications such as video streaming, video playback, video transmission, and/or video telephony. [00125] In the example in Figure 3, the source device 12 includes a video source 18, a video encoder 20, and an output interface 22. In some examples, the output interface 22 may include a modulator/demodulator (modem) and/or a transmitter. Video source 18 can include a video capture device (for example, a video camera), a stored video file that includes previously captured video data, a video input interface configured to receive video data from a video content provider, and/or a computer graphics system configured to generate video data, or a combination of the foregoing video data sources.
[00126] Video encoder 20 can encode video data from video source 18. In some examples, the source device 12 directly transmits the encoded video data to the destination device 14 using the output interface 22. The encoded video data can also be stored on the storage medium or on the file server, for later access by the destination device 14 to perform decoding and/or playback. [00127] In the example in Figure 3, the destination device 14 includes an input interface 28, a video decoder 30, and a display device 32. In some examples, the input interface 28 includes a receiver and/or a modem. The input interface 28 can receive encoded video data using channel 16. The display device 32 may be integrated with the destination device 14 or may be outside the destination device 14. The display device 32 usually displays decoded video data. The display device 32 can
include a plurality of types of display devices, for example, a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, and other known display devices.
[00128] Video encoder 20 and video decoder 30 can perform operations according to a video compression standard (for example, the H.265 high efficiency video coding standard), and can conform to the HEVC test model (HM). The ITU-T H.265 (V3) (04/2015) text description of the H.265 standard was released on April 29, 2015 and can be downloaded from http://handle.itu.int/11.1002/1000/12455. The entire contents of the file are incorporated into this specification by reference.
[00129] Alternatively, video encoder 20 and video decoder 30 can perform operations according to other proprietary or industry standards, including ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (also referred to as ISO/IEC MPEG-4 AVC), including its scalable video coding (SVC) and multiview video coding (MVC) extensions. It should be understood that the technology of the present invention is not limited to any specific standard or encoding or decoding technology.
[00130] Furthermore, Figure 3 is merely an example, and the technology of the present invention is applicable to a video encoding/decoding application (for example, single-sided video encoding or video decoding) that does not necessarily include any communication between an encoding device and a decoding device. In another example, data is retrieved from a local memory, and the data is transmitted in a streaming mode over a network, or the data is handled in a similar manner. An encoding device can encode data and store the data in a memory, and/or a decoding device can retrieve data from a memory and decode the data. In many examples, encoding and decoding are performed by a plurality of devices that do not communicate with each other but simply encode data and store the encoded data in a memory, and/or retrieve data from a memory and decode the data. [00131] Video encoder 20 and video decoder 30 each can be implemented as any one of a plurality of appropriate circuits, such as one or more microprocessors, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), discrete logic, hardware, or any combination thereof. If the technology is partially or completely implemented using software, the apparatus can store software instructions on an appropriate non-transitory computer-readable storage medium, and can execute the instructions in hardware using one or more processors, to execute the technology of the present invention. Any of the foregoing (including hardware, software, a combination of hardware and software, and the like) can be considered as one or more processors. Each of the video encoder 20 and the video decoder 30 can be included in one or more encoders or decoders. Either of the video encoder 20 and the video decoder 30 can be integrated as a component of a combined encoder/decoder (codec (CODEC)) in another device.
[00132] The present invention may generally refer to the video encoder 20 sending, using a signal, a piece of information to another device (for example, the video decoder 30). The term "sends, using a signal" may generally refer to the transfer of a syntax element and/or encoded video data. The transfer can take place in real time or approximately in real time. Alternatively, the communication can occur over a span of time, for example, when a syntax element is stored in a computer-readable storage medium during encoding using binary data obtained through encoding. After being stored in this medium, the syntax element can be retrieved by the decoding device at any time.
[00133] Video encoder 20 encodes video data. The video data can include one or more images. The video encoder 20 can generate a data stream. The data stream includes encoding information of the video data in the form of a bit stream. The encoding information can include encoded image data and related data. The related data can include a sequence parameter set (SPS), an image parameter set (PPS), and other syntax structures. The SPS can include a parameter applied to zero or a plurality of sequences. The PPS can include a parameter applied to zero or a plurality of images. A syntax structure is a set of zero or a plurality of syntax elements arranged in the data stream in a specified order.
[00134] To generate encoding information for an image, the video encoder 20 can partition the image into a grid of coding tree blocks (CTB). In some examples, a CTB may be referred to as a tree block, a largest coding unit (LCU), or a coding tree unit. A CTB is not limited to a specific size, and may include one or more coding units (CU). Each CTB can be associated with a pixel block of an equal size in the image. Each pixel corresponds to one luminance (luminance or luma) sample and two chrominance (chrominance or chroma) samples. Therefore, each CTB can be associated with one luminance sampling block and two chrominance sampling blocks. The CTBs of an image can be partitioned into one or more slices. In some examples, each slice includes an integer quantity of CTBs. During image encoding, the video encoder 20 can generate encoding information for each slice of the image, that is, encode the CTBs within the slice. To encode a CTB, the video encoder 20 recursively performs quadtree partitioning on the pixel block associated with the CTB, to partition the pixel block into pixel blocks of decreasing sizes. The smaller pixel blocks can be associated with CUs.
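As a rough illustration of the recursive quadtree partition just described, the following minimal sketch in Python enumerates the leaf CUs of a CTB; the split-decision callback and the sizes used in the example are assumptions made only for illustration, since a real encoder decides whether to split a block by rate-distortion optimization.

    # A minimal sketch of recursively quadtree-partitioning a CTB pixel block into CUs.
    # The should_split callback is a placeholder assumption for the encoder's decision.

    def quadtree_partition(x, y, size, min_cu_size, should_split):
        """Yield (x, y, size) for the leaf CUs of a CTB whose top-left corner is (x, y)."""
        if size > min_cu_size and should_split(x, y, size):
            half = size // 2
            for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
                yield from quadtree_partition(x + dx, y + dy, half, min_cu_size, should_split)
        else:
            yield (x, y, size)

    # Example: split every block larger than 32x32, so a 64x64 CTB yields four 32x32 CUs.
    print(list(quadtree_partition(0, 0, 64, 8, lambda x, y, s: s > 32)))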
[00135] Figure 4 is a schematic block diagram of a video encoder 20 according to an embodiment of the present invention. The video encoder 20 includes an encoder side prediction module 201, a transform and quantization module 202, an entropy encoding module 203, an encoder side reconstruction module 204, and an encoder side filter module 205. Figure 5 is a schematic block diagram of a video decoder 30 according to an embodiment of the present invention. The video decoder 30 includes a decoder side prediction module 206, an inverse transform and inverse quantization module 207, an entropy decoding module 208, a decoder side reconstruction module 209, and a decoder side filter module 210. Details are as follows.
[00136] The encoder side prediction module 201 and the decoder side prediction module 206 are configured to generate predictive data. The video encoder 20 can generate one or more prediction units (PU) for each CU that is no longer partitioned.
Each PU of the CU can be associated with a different pixel block within the pixel block of the CU. The video encoder 20 can generate a predictive pixel block for each PU of the CU. The video encoder 20 can generate the predictive pixel block for the PU through intraprediction or interprediction. If the video encoder 20 generates the predictive pixel block for the PU through intraprediction, the video encoder 20 can generate the predictive pixel block for the PU based on decoded pixels of the image associated with the PU. If the video encoder 20 generates the predictive pixel block for the PU through interprediction, the video encoder 20 can generate the predictive pixel block for the PU based on decoded pixels of one or more images different from the image associated with the PU. The video encoder 20 can generate a residual pixel block of the CU based on the predictive pixel blocks for the PUs of the CU. The residual pixel block of the CU can indicate differences between sampling values in the predictive pixel blocks for the PUs of the CU and corresponding sampling values in an initial pixel block of the CU.
[00137] The transform and quantization module 202 is configured to process predicted residual data. The video encoder 20 can perform a recursive quadtree partition on the residual pixel block of the CU, to partition the residual pixel block of the CU into one or more smaller residual pixel blocks associated with transform units (TU) of the CU. Each pixel in a pixel block associated with a TU corresponds to one luminance sample and two chrominance samples; therefore, each TU can be associated with one residual luminance sampling block and two residual chrominance sampling blocks. The video encoder 20 can perform one or more transformations on a residual sampling block associated with the TU, thereby generating a coefficient block. The transformation can be a DCT transform or a variant thereof. Using a DCT transform matrix, a two-dimensional transformation is calculated by applying a one-dimensional transformation in the horizontal and vertical directions, in order to obtain the coefficient block. The video encoder 20 can run a quantization process on each coefficient in the coefficient block. Quantization usually means that a coefficient is quantized to reduce the amount of data used to represent the coefficient, in order to provide further compression. The inverse transform and inverse quantization module 207 performs the inverse process of the transform and quantization module 202.
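The separable two-dimensional transform mentioned above can be sketched as follows, assuming a floating-point orthonormal DCT-II matrix A and computing Y = A X A^T; a real codec uses scaled integer approximations of such a matrix, so the sketch only illustrates the horizontal/vertical separation.

    # A minimal sketch of a separable 2-D DCT applied to a residual block:
    # a 1-D DCT matrix is applied along the vertical direction and then along
    # the horizontal direction, Y = A * X * A^T.
    import math

    def dct_matrix(n):
        a = [[0.0] * n for _ in range(n)]
        for k in range(n):
            scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            for i in range(n):
                a[k][i] = scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
        return a

    def matmul(p, q):
        return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
                 for j in range(len(q[0]))] for i in range(len(p))]

    def transpose(p):
        return [list(row) for row in zip(*p)]

    def forward_transform(residual):
        a = dct_matrix(len(residual))
        return matmul(matmul(a, residual), transpose(a))  # the coefficient block

    # Example: a flat 4x4 residual block concentrates all its energy in the DC coefficient.
    coefficients = forward_transform([[4.0] * 4 for _ in range(4)])
    print(round(coefficients[0][0], 3))  # 16.0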
[00138] The video encoder 20 can generate a set of syntax elements that represent the coefficients in a quantized coefficient block. The video encoder 20 can perform an entropy encoding operation (for example, a context-adaptive binary arithmetic coding (CABAC) operation) on some or all of the syntax elements using the entropy encoding module 203. To perform CABAC encoding on the syntax elements, the video encoder 20 can binarize the syntax elements to form a binary sequence that includes one or more bits (referred to as binary bits). The video encoder 20 can encode some of the binary bits using regular encoding, and can encode the others of the binary bits using bypass encoding.
[00139] In addition to performing entropy encoding on the syntax elements of the coefficient block, the video encoder 20 can perform inverse quantization and an inverse transform on a transformed coefficient block using the encoder side reconstruction module 204, to reconstruct a residual sampling block from the transformed coefficient block. The video encoder 20 can add the reconstructed residual sampling block to a corresponding sampling block of one or more predictive sampling blocks, to generate a reconstructed sampling block. By reconstructing the sampling block of each color component, the video encoder 20 can reconstruct the pixel block associated with a TU. The reconstruction of a pixel block for each TU of the CU is performed in this manner until the reconstruction of the entire pixel block of the CU is completed.
[00140] After reconstructing the pixel block of the CU, the video encoder 20 performs a deblocking filtering operation using the encoder side filter module 205, to reduce a blocking effect of the pixel block associated with the CU. After performing the deblocking filtering operation, the video encoder 20 can perform a sample adaptive offset (SAO) operation to modify a reconstructed pixel block of a CTB of the image. After performing these operations, the video encoder 20 can store the reconstructed pixel block of the CU in a decoded image buffer, to generate a predictive pixel block for another CU.
[00141] The video decoder 30 can receive a data stream. The data stream includes, in the form of a bit stream, encoding information of the video data encoded by the video encoder 20. The video decoder 30 analyzes the data stream using the entropy decoding module 208 to extract syntax elements from the data stream. When the video decoder 30 performs CABAC decoding, the video decoder 30 can perform regular decoding on some binary bits and can perform bypass decoding on other binary bits. There is a mapping relationship between the binary bits and the syntax elements of the data stream, and the syntax elements are obtained by analyzing the binary bits.
[00142] The video decoder 30 can reconstruct, using the decoder side reconstruction module 209, an image of the video data based on the syntax elements extracted from the data stream. The process of reconstructing the video data based on the syntax elements is substantially the inverse of the process performed by the video encoder 20 to generate the syntax elements. For example, the video decoder 30 can generate a predictive pixel block for a PU of a CU based on a syntax element associated with the CU. In addition, the video decoder 30 can inversely quantize a coefficient block associated with a TU of the CU. The video decoder 30 can perform an inverse transform on the inversely quantized coefficient block, to reconstruct a residual pixel block associated with the TU of the CU. The video decoder 30 can reconstruct the pixel block of the CU based on the predictive pixel block and the residual pixel block.
[00143] After reconstructing the pixel block of the CU, the video decoder 30 performs a deblocking filtering operation using the decoder side filter module 210, to reduce a blocking effect of the pixel block associated with the CU. In addition, based on one or more SAO syntax elements, the video decoder 30 can perform the same SAO operation as the video encoder 20. After performing these operations, the video decoder 30 can store the pixel block of the CU in a decoded image buffer. The decoded image buffer can provide a reference image used for subsequent motion compensation, intraprediction, and presentation by a display device.
[00144] Figure 6 is a schematic block diagram of an example of a video encoder 20 configured to implement a technology of the present invention. It should be understood that Figure 6 is an example and should not be considered as a limitation on the technology that is broadly exemplified and described in the present invention. As shown in Figure 6, the video encoder 20 includes a prediction processing unit 100, a residue generation unit 102, a transform processing unit 104, a quantization unit 106, an inverse quantization unit 108, an inverse transform processing unit 110, a reconstruction unit 112, a filter unit 113, a decoded image buffer 114, and an entropy coding unit 116. The entropy coding unit 116 includes a regular CABAC encoding/decoding machine 118 and a bypass encoding/decoding machine 120. The prediction processing unit 100 includes an interprediction processing unit 121 and an intraprediction processing unit 126. The interprediction processing unit 121 includes a motion estimation unit 122 and a motion compensation unit 124. In another example, the video encoder 20 may include more, fewer, or different functional components.
[00145] Video encoder 20 receives video data. To encode the video data, the video encoder 20 can encode each slice of each image of the video data. During slice encoding, the video encoder 20 can encode each CTB in the slice. During CTB encoding, the prediction processing unit 100 can perform, in accordance with the international video coding standard H.265/HEVC, a quadtree partition on the pixel block associated with a CTB, to partition the pixel block into smaller pixel blocks. For example, in intraprediction, the prediction processing unit 100 can partition the pixel block of the CTB into four sub-blocks of the same size. The recursive quadtree partition can continue to be performed on one or more of the sub-blocks to obtain four sub-blocks of the same size, in order to obtain an image block on which encoding can be performed and which is also referred to as a coding unit (CU). In interprediction, the prediction processing unit 100 partitions a CU in a CTB into one, two, or four PUs based on eight non-recursive partition patterns.
[00146] The video encoder 20 can encode the CUs in a CTB of an image to generate encoding information of the CUs. The video encoder 20 can encode the CUs of the CTB in a zigzag scan order. In other words, the video encoder 20 can encode the CUs in the following order: the upper-left CU, the upper-right CU, the lower-left CU, and the lower-right CU. When the video encoder 20 encodes a CU obtained through partitioning, the video encoder 20 can encode, in the zigzag scan order, the CUs associated with the pixel sub-blocks of the CU obtained through partitioning.
[00147] In addition, the prediction processing unit 100 can partition the pixel block of a CU into one or more PUs of the CU. The video encoder 20 and a video decoder 30 can support PUs of various sizes. Assuming that the size of a specific CU is 2Nx2N, in the existing H.265/HEVC video coding standard, the video encoder 20 and the video decoder 30 can support a PU with a size of 2Nx2N or NxN to perform intraprediction; and support a symmetric PU with a size of 2Nx2N, 2NxN, Nx2N, NxN, or a similar size to perform interprediction. The video encoder 20 and the video decoder 30 can further support an asymmetric PU with a size of 2NxnU, 2NxnD, nLx2N, or nRx2N to perform interprediction.
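As a concrete illustration of the PU partition patterns listed above, the following minimal sketch prints the (width, height) pixel dimensions of each pattern for a CU of size 2Nx2N with 2N = 64; the 1:3 split ratio assumed for the asymmetric patterns follows common H.265/HEVC practice, and the dictionary layout is purely illustrative.

    # A minimal sketch of the PU dimensions produced by each partition pattern
    # for a 2Nx2N CU. Each entry lists the (width, height) of the resulting PUs.

    def pu_sizes(cu_size):
        n = cu_size // 2   # N
        q = cu_size // 4   # a quarter of 2N, used by the asymmetric patterns
        return {
            "2Nx2N": [(cu_size, cu_size)],
            "2NxN":  [(cu_size, n)] * 2,
            "Nx2N":  [(n, cu_size)] * 2,
            "NxN":   [(n, n)] * 4,
            "2NxnU": [(cu_size, q), (cu_size, cu_size - q)],
            "2NxnD": [(cu_size, cu_size - q), (cu_size, q)],
            "nLx2N": [(q, cu_size), (cu_size - q, cu_size)],
            "nRx2N": [(cu_size - q, cu_size), (q, cu_size)],
        }

    for mode, sizes in pu_sizes(64).items():
        print(mode, sizes)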
[00148] However, a major problem with such a CTB/CU partition pattern in the H.265/HEVC video coding standard is that an image block to be encoded can be only a square block in intraprediction; the image block to be encoded can be a rectangular block in interprediction, but a CU is partitioned into PUs in interprediction in a non-recursive mode. Therefore, the shape of an interprediction PU is also greatly limited. To improve the flexibility of partitioning a coding block of a video, a quadtree plus binary tree (Quadtree Plus Binary Tree, QTBT) partition method has emerged.
[00149] The method is specifically: first perform a recursive quadtree partition on an image block (for example, a CTU), and then perform a recursive partition on each quadtree leaf node using a binary tree. During the quadtree partition, a quadtree leaf node must be no smaller than minQTSize. During a binary tree partition, a root node of the binary tree partition must be no larger than maxBTSize and no smaller than minBTSize, and the binary tree partition depth must not exceed maxBTDepth. The binary tree partition includes horizontal binary tree partition and vertical binary tree partition, that is, partitioning a current image block into two image blocks of equal size in the horizontal direction or the vertical direction. The quadtree plus binary tree partition structure is shown in Figure 7. Assume that the image block size (which can be either a CTU or a CTB) is 128x128, MinQTSize is 16x16, MaxBTSize is 64x64, MinBTSize (a width and a height) is 4, and MaxBTDepth is 4. First, a quadtree partition is performed on the image block to obtain quadtree leaf nodes. A quadtree leaf node size can range from 16x16 (that is, MinQTSize) to 128x128 (that is, the size of the image block). If the size of the quadtree leaf node is 128x128, no binary tree partition is performed on the quadtree leaf node, because the size of the quadtree leaf node exceeds MaxBTSize (that is, 64x64). Otherwise, a binary tree partition is additionally performed on the quadtree leaf node. In this case, the quadtree leaf node is the root node of a binary tree, and the depth of the binary tree is 0. When the depth of the binary tree reaches MaxBTDepth (that is, 4), no further partition is performed. When the width of a binary tree node equals MinBTSize (that is, 4), no further horizontal partition is performed. Similarly, when the height of a binary tree node equals MinBTSize (that is, 4), no further vertical partition is performed. As shown in Figure 7, the left figure shows a block partition obtained using QTBT, and the right figure shows the corresponding tree structure. A solid line represents a quadtree partition, and a dotted line represents a binary tree partition. A specific method for signaling the quadtree plus binary tree partition pattern in an encoding process can be as follows:
[00150] (a) If a quadtree partition can be used for a current image block, that is, the block size is not less than minQTSize and no binary tree partition has been performed before, an identifier A is encoded, where 0 indicates that no quadtree partition is performed, and 1 indicates that a quadtree partition is performed.
[00151] (b) If a binary tree partition can also be used for a current block, that is, the current block size is not less than minBTSize and not greater than maxBTSize, and the depth of the binary tree does not exceed maxBTDepth, an identifier B is encoded, where 0 indicates that no binary tree partition is performed, and a nonzero value indicates that a binary tree partition is performed. If the identifier is nonzero, another value must be encoded to indicate a horizontal partition or a vertical partition: 1 represents a horizontal binary tree partition, and 2 represents a vertical binary tree partition. For example, in one form of representation of the identifier B, 0 indicates that no binary tree partition is performed, 10 indicates that a horizontal binary tree partition is performed, and 11 indicates that a vertical binary tree partition is performed.
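A minimal sketch of the signaling in (a) and (b), combined with the size and depth constraints described above (minQTSize, maxBTSize, minBTSize, maxBTDepth), is given below; the parameter values, the assumption that quadtree splitting applies only to square blocks, and the returned flag strings are illustrative and not normative syntax.

    # A minimal sketch of encoding identifier A (quadtree split) and identifier B
    # (binary tree split) for one block, following (a) and (b) above.
    # The parameter values are the ones used in the example above and are assumptions.

    MIN_QT_SIZE, MAX_BT_SIZE, MIN_BT_SIZE, MAX_BT_DEPTH = 16, 64, 4, 4

    def encode_split_flags(width, height, bt_depth, do_qt_split, bt_split):
        """Return the flag bits written for one block.

        do_qt_split: True/False, the encoder's choice when a quadtree split is allowed.
        bt_split: 'none', 'hor', or 'vert', the choice when a binary tree split is allowed.
        """
        bits = ""
        # (a) quadtree split flag: block size not less than minQTSize and no
        # binary tree partition performed before (bt_depth == 0).
        qt_allowed = width == height and width >= MIN_QT_SIZE and bt_depth == 0
        if qt_allowed:
            bits += "1" if do_qt_split else "0"          # identifier A
            if do_qt_split:
                return bits                              # recurse into four sub-blocks
        # (b) binary tree split flag: size between minBTSize and maxBTSize,
        # depth below maxBTDepth.
        bt_allowed = (max(width, height) <= MAX_BT_SIZE
                      and min(width, height) >= MIN_BT_SIZE
                      and bt_depth < MAX_BT_DEPTH)
        if bt_allowed:
            if bt_split == "none":
                bits += "0"                              # identifier B: no binary split
            elif bt_split == "hor":
                bits += "10"                             # horizontal binary tree partition
            else:
                bits += "11"                             # vertical binary tree partition
        return bits

    # Example: a 32x32 block that is not quadtree-split and is then split horizontally.
    print(encode_split_flags(32, 32, 0, do_qt_split=False, bt_split="hor"))  # '010'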
[00152] Partitioning the image block through the quadtree plus binary tree partition can improve coding flexibility and coding efficiency.
[00153] The interprediction processing unit 121 can perform interprediction on each PU in a CU to generate predictive PU data. The predictive PU data can include a predictive pixel block that corresponds to the PU and motion information of the PU. A slice can be an I slice, a P slice, or a B slice. The interprediction processing unit 121 can perform a different operation on the PU of the CU depending on whether the PU is in an I slice, a P slice, or a B slice. In an I slice, intraprediction is performed on all PUs.
[00154] If the PU is in a P slice, the motion estimation unit 122 can search the reference images in a reference image list (for example, list 0) for a reference block for the PU. The reference block for the PU can be a pixel block that most closely corresponds to the pixel block of the PU. The motion estimation unit 122 can generate a reference image index that indicates, in list 0, the reference image that includes the reference block for the PU, and a motion vector that indicates a spatial displacement between the pixel block of the PU and the reference block. The motion estimation unit 122 can output the reference image index and the motion vector as motion information of the PU. The motion compensation unit 124 can generate the predictive pixel block for the PU based on the reference block indicated by the motion information of the PU.
[00155] If the PU is in a B slice, the motion estimation unit 122 can perform unidirectional interprediction or bidirectional interprediction on the PU. To perform unidirectional interprediction on the PU, the motion estimation unit 122 can search the reference images in a first reference image list (list 0) or a second reference image list (list 1) to find a reference block for the PU. The motion estimation unit 122 can output the following as the motion information: a reference image index that indicates a location, in list 0 or list 1, of the reference image that includes the reference block, a motion vector that indicates a spatial displacement between the pixel block of the PU and the reference block, and a prediction direction indicator that indicates whether the reference image is in list 0 or list 1. To perform bidirectional interprediction on the PU, the motion estimation unit 122 can search the reference images in list 0 to find a reference block for the PU, and can also search the reference images in list 1 to find another reference block for the PU. The motion estimation unit 122 can generate reference image indices that indicate locations, in list 0 and list 1, of the reference images that include the reference blocks. In addition, the motion estimation unit 122 can generate motion vectors that indicate spatial displacements between the reference blocks and the pixel block of the PU. The motion information of the PU can include the reference image indices and the motion vectors of the PU. The motion compensation unit 124 can generate the predictive pixel block for the PU based on the reference blocks indicated by the motion information of the PU.
[00156] The intraprediction processing unit 126 can perform intraprediction on the PU to generate predictive data for the PU. The predictive data for the PU can include a predictive pixel block and various syntax elements of the PU. The intraprediction processing unit 126 can perform intraprediction on PUs in an I slice, a P slice, or a B slice.
[00157] To perform intraprediction on the PU, the intraprediction processing unit 126 can generate a plurality of sets of predictive data for the PU in a plurality of intraprediction modes. To generate a set of predictive data for the PU in an intraprediction mode, the intraprediction processing unit 126 can extend samples from sampling blocks of neighboring PUs across the sampling block of the PU in a direction associated with the intraprediction mode. Assuming that a left-to-right and top-to-bottom coding order is used for PUs, CUs, and CTBs, a neighboring PU can be above the PU, at the top right of the PU, at the top left of the PU, or on the left of the PU. The intraprediction processing unit 126 may use sets of intraprediction modes that include different quantities of intraprediction modes, for example, 33 directional intraprediction modes. In some examples, the quantity of intraprediction modes may depend on a
size of a pixel block for the PU.
[00158] The prediction processing unit 100 can select the predictive data for a PU of a CU from among the predictive data generated for the PU by the interprediction processing unit 121 or the predictive data generated for the PU by the intraprediction processing unit 126. In some examples, the prediction processing unit 100 can select the predictive data for the PU of the CU based on a rate/distortion measure of a predictive data set. For example, a Lagrange cost function is used to perform the selection between a coding scheme and its parameter values (for example, a motion vector, a reference index, and an intraprediction direction). In such a cost function, a weighting factor lambda is used to associate an actual or estimated image distortion caused by a lossy encoding method with an actual or estimated amount of information required to represent pixel values in an image area: C = D + lambda × R, where C represents the Lagrange cost to be minimized, D represents the image distortion (for example, a mean squared error) in a mode and with its parameters, and R represents the quantity of bits (for example, including the amount of data used to represent a candidate motion vector) required to reconstruct the image block by a decoder. A lower-cost coding scheme is usually selected as the actual coding scheme. The predictive pixel block of the selected predictive data may be referred to as a selected predictive pixel block in this specification.
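The Lagrangian selection described in the preceding paragraph can be sketched as follows; the distortion values, bit counts, and lambda used in the example are made-up numbers for illustration only.

    # A minimal sketch of selecting the prediction candidate with the lowest
    # Lagrange cost C = D + lambda * R.

    def rd_cost(distortion, rate_bits, lam):
        return distortion + lam * rate_bits

    def select_prediction(candidates, lam):
        """candidates: list of (name, distortion, rate_bits); return the cheapest one."""
        return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

    # Example: the intra candidate distorts less but costs more bits than the inter candidate.
    candidates = [("intra", 1200.0, 96), ("inter", 1500.0, 40)]
    print(select_prediction(candidates, lam=10.0))  # ('inter', 1500.0, 40)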
[00159] The residue generation unit 102 can generate a residual pixel block of the CU based on the pixel block of the CU and the predictive pixel blocks selected for the PUs of the CU. For example, the residue generation unit 102 can generate the residual pixel block of the CU so that each sample of the residual pixel block has a value equal to a difference between the following: a sample of the pixel block of the CU and a corresponding sample in a predictive pixel block selected for a PU of the CU.
[00160] The prediction processing unit 100 can perform a quadtree partition to partition the residual pixel block of the CU into sub-blocks. Each residual pixel block that is no longer partitioned can be associated with a different TU of the CU. There is no necessary connection between the size and location of the residual pixel block associated with a TU of the CU and the size and location of the pixel block associated with a PU of the CU.
[00161] A pixel in a residual pixel block of a TU corresponds to one luminance sample and two chrominance samples; therefore, each TU can be associated with one luminance sampling block and two chrominance sampling blocks. The transform processing unit 104 can perform one or more transformations on a residual sampling block associated with the TU, in order to generate a coefficient block for each TU of the CU. For example, the transform processing unit 104 can perform a discrete cosine transform (DCT), a directional transform, or a conceptually similar transform on the residual sampling block.
[00162] The quantization unit 106 can quantize the coefficients in a coefficient block. For example, an n-bit coefficient can be truncated to an m-bit coefficient during quantization, where n is greater than m. The quantization unit 106 can quantize, based on a quantization parameter (QP) value associated with the CU, a coefficient block associated with a TU of the CU. The video encoder 20 can adjust the QP value associated with the CU, to adjust a degree of quantization performed on the coefficient block associated with the CU.
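A minimal sketch of such QP-controlled quantization and the corresponding inverse quantization is shown below; the mapping from QP to a quantization step that doubles every 6 QP values follows common H.264/H.265 practice, while the rounding offset and the sample values are illustrative assumptions.

    # A minimal sketch of scalar quantization and inverse quantization of a
    # coefficient block controlled by a quantization parameter (QP).

    def qp_to_step(qp):
        # The quantization step roughly doubles every 6 QP values.
        return 2 ** ((qp - 4) / 6.0)

    def quantize(coefficients, qp):
        step = qp_to_step(qp)
        return [[int(abs(c) / step + 0.5) * (1 if c >= 0 else -1) for c in row]
                for row in coefficients]

    def dequantize(levels, qp):
        step = qp_to_step(qp)
        return [[level * step for level in row] for row in levels]

    # Example: a larger QP gives a coarser representation of the same coefficients.
    block = [[52.0, -7.0], [3.0, 0.5]]
    for qp in (22, 34):
        levels = quantize(block, qp)
        print(qp, levels, dequantize(levels, qp))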
[00163] The inverse quantization unit 108 and the inverse transform processing unit 110 can respectively perform inverse quantization and inverse transform on a transformed coefficient block to reconstruct a residual sampling block from the coefficient block. Reconstruction unit 112 may add a sample from a reconstructed residual sampling block to a corresponding sample of one or more predictive sampling blocks generated by the prediction processing unit 100, to generate a reconstructed sampling block associated with the TU. By reconstructing a sampling block from each TU for the CU in this mode, the video encoder 20 can reconstruct a pixel block from the CU.
[00164] The filter unit 113 can perform a deblocking filtering operation to reduce a blocking effect of a pixel block associated with a CU. In addition, the filter unit can perform an SAO operation determined by the prediction processing unit 100 on the reconstructed sampling block, to restore the pixel block. The filter unit can generate encoding information of an SAO syntax element of a CTB.
[00165] The decoded image buffer 114 can store the reconstructed pixel block. The interprediction processing unit 121 may use a reference image that includes the reconstructed pixel block to perform interprediction on a PU of another image. In addition, the intraprediction processing unit 126 can use the reconstructed pixel block in the decoded image buffer 114 to perform intraprediction on another PU in the same image as the CU.
[00166] The entropy coding unit 116 can receive data from other functional components of the video encoder 20. For example, the entropy coding unit 116 can receive a coefficient block from the quantization unit 106 and can receive a syntax element from the prediction processing unit 100. The entropy coding unit 116 can perform one or more entropy coding operations on the data to generate entropy-encoded data. For example, the entropy coding unit 116 can perform a context-adaptive variable-length coding (CAVLC) operation, a CABAC operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a probability interval partitioning entropy (PIPE) coding operation, or another type of entropy coding operation. In a specific example, the entropy coding unit 116 can use the regular CABAC encoding/decoding machine 118 to encode a regular-coded binary bit of a syntax element, and can use the bypass encoding/decoding machine 120 to encode a bypass-coded binary bit.
[00167] Figure 8 is a block diagram of an example of a video decoder 30 configured to implement a technology of the present invention. It should be understood that Figure 8 is an example and should not be considered as a limitation on a technology that is widely exemplified and described by the present invention. As shown in Figure 8, video decoder 30 includes an entropy decoding unit 150, a prediction processing unit 152, an inverse quantization unit 154, an inverse transformation processing unit 156,
a reconstruction unit 158, a filter unit 159, and a decoded image buffer 160. The prediction processing unit 152 includes a motion compensation unit 162 and an intraprediction processing unit 164. The entropy decoding unit 150 includes a regular CABAC encoding/decoding machine 166 and a bypass encoding/decoding machine 168. In another example, video decoder 30 may include more, fewer, or different function components.
[00168] The video decoder 30 can receive a data stream. The entropy decoding unit 150 can analyze the data stream to extract a syntax element from the data stream. During data stream decoding, the entropy decoding unit 150 can analyze a syntax element that is in the data stream and on which entropy coding is performed. Prediction processing unit 152, inverse quantization unit 154, inverse transform processing unit 156, reconstruction unit 158, and filter unit 159 can decode video data based on the syntax element extracted from the data stream, to generate decoded video data.
[00169] The syntax element can include a regular CABAC encoded/decoded binary bit and a bypass encoded/decoded binary bit. The entropy decoding unit 150 can use the regular CABAC encoding/decoding machine 166 to decode the regular CABAC encoded/decoded binary bit, and can use the bypass encoding/decoding machine 168 to decode the bypass encoded/decoded binary bit.
[00170] If intraframe predictive encoding is performed on a PU, the intraprediction processing unit 164 can perform intraprediction to generate a predictive sampling block for the PU. The intraprediction processing unit 164 can generate a predictive pixel block for the PU in an intraprediction mode based on pixel blocks of spatially adjacent PUs. The intraprediction processing unit 164 can determine the intraprediction mode for the PU based on one or more syntax elements obtained by analyzing the data stream.
[00171] The motion compensation unit 162 can build a first list of reference images (list 0) and a second list of reference images (list 1) based on the syntax element obtained by analyzing the data flow. In addition, if an interframe predictive encoding is performed on a PU, the entropy decoding unit 150 can analyze the PU's motion information. The motion compensation unit 162 can determine one or more reference blocks for the PU based on the movement information of the PU. The motion compensation unit 162 can generate a predictive pixel block for the PU based on one or more reference blocks for the PU.
[00172] In addition, the video decoder 30 can perform a reconstruction operation on a CU that is no longer partitioned. To perform the reconstruction operation on the CU that is no longer partitioned, the video decoder 30 can perform a reconstruction operation on each TU for the CU. By performing the reconstruction operation on each TU for the CU, the video decoder 30 can reconstruct a residual pixel block associated with the CU.
[00173] During a reconstruction operation on a TU for the CU, the inverse quantization unit 154 can inversely quantize (i.e., de-quantize) a block of coefficients associated with the TU. The inverse quantization unit 154 can determine a degree of quantization using a QP value associated with a CU that
corresponds to the TU, and the determined degree of quantization is the same as a degree of inverse quantization to be used by the inverse quantization unit 154.
[00174] After the inverse quantization unit 154 inversely quantizes the coefficient block, the inverse transform processing unit 156 can perform one or more inverse transforms on the coefficient block, in order to generate a residual sampling block associated with the TU. For example, the inverse transform processing unit 156 can perform an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotation transform, an inverse directional transform, or another inverse transform that corresponds to the transform used on the encoder side, on the coefficient block.
[00175] When applicable, the reconstruction unit 158 can use a residual pixel block associated with the TU of the CU and the predictive pixel block (to be specific, intraprediction data or interprediction data) for the CU to reconstruct a block of CU pixels. Specifically, the reconstruction unit 158 can add a sample of a residual pixel block to a corresponding sample of a predictive pixel block to reconstruct the CU pixel block.
[00176] Filter unit 159 can perform a deblocking filtering operation to reduce a blocking effect of a block of pixels associated with a CU of a CTB. In addition, filter unit 159 can modify a CTB pixel value based on an SAO syntax element obtained by analyzing the data stream. For example, filter unit 159 can determine a corrected value based on the CTB's SAO syntax element, and add the determined corrected value to a sample value in a reconstructed pixel block of the CTB. By modifying some or all of the pixel values of an image's CTB, filter unit 159 can correct a reconstructed image of video data based on the SAO syntax element.
[00177] The video decoder 30 can store a block of pixels for a CU in the decoded image buffer 160. The decoded image buffer 160 can provide a reference image for subsequent motion compensation, intraprediction, and presentation on a display device (for example, display device 32 in Figure 3). For example, the video decoder 30 may perform an intraprediction operation or an interprediction operation on a PU of another CU based on a block of pixels in the decoded image buffer 160.
[00178] The present invention proposes an improved method for partitioning an image block (CTB/CTU) when the prediction processing unit 100 performs a prediction process.
[00179] As described above, the prediction processing unit 100 of the encoder 20 can partition a block of images using a quadtree plus binary tree partition pattern. However, the following problem exists in a process of partitioning a block of images using a quadtree plus a binary tree: if a block of images with a size of 2Nx2N is partitioned into four blocks of NxN sub-images, the partition can be implemented using the following methods in Figure 9 and Figure 10.
[00180] Method 1: Referring to Figure 9, the block of 2Nx2N images is directly partitioned into four blocks of NxN subimages through quadtree partition, and this method can be marked as QT.
[00181] Method 2: Referring to Figure 10 (a), a recursive binary tree partition pattern is used, where a horizontal binary tree partition is first performed on the 2Nx2N image block to obtain two blocks of 2NxN sub-images, and then a vertical binary tree partition is performed separately on the two blocks of 2NxN sub-images, and this method can be marked as HBQT.
[00182] Method 3: Referring to Figure 10 (b), a recursive binary tree partition pattern is used, where a vertical binary tree partition is first performed on the 2Nx2N image block to obtain two blocks of Nx2N sub-images, and then a horizontal binary tree partition is performed separately on the two blocks of Nx2N sub-images, and this method can be marked as VBQT.
[00183] For a quadtree plus binary tree partition structure, the partition flexibility of an encoded block is improved, but a block of sub-images can be obtained not only through quadtree partition but also through binary tree partition. There is a redundancy between the quadtree partition and the binary tree partition. This redundancy causes an increase in complexity of an encoder side and an increase in partition identifiers, and correspondingly burdens a decoder side. To be specific, the complexity and a delay on the decoder side are increased.
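The redundancy can be illustrated with a short sketch. The following Python fragment is illustrative only; the rectangle representation and the function names are not part of the invention. It derives the sub-image blocks produced by QT, HBQT, and VBQT for one 2Nx2N block (here N = 32) and shows that the three partition routes end in the same four NxN sub-image blocks, differing only in the partition identifiers that would be signaled and in the processing order.

    # Each block is represented as (x, y, width, height); illustrative only.
    def quadtree_split(b):
        x, y, w, h = b
        return [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                (x, y + h // 2, w // 2, h // 2), (x + w // 2, y + h // 2, w // 2, h // 2)]

    def horizontal_bt_split(b):   # 2Nx2N -> two 2NxN sub-image blocks (top, bottom)
        x, y, w, h = b
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]

    def vertical_bt_split(b):     # 2Nx2N -> two Nx2N sub-image blocks (left, right)
        x, y, w, h = b
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]

    block = (0, 0, 64, 64)        # a 2Nx2N block with N = 32
    qt = quadtree_split(block)
    hbqt = [s for t in horizontal_bt_split(block) for s in vertical_bt_split(t)]
    vbqt = [s for t in vertical_bt_split(block) for s in horizontal_bt_split(t)]
    # Same four NxN sub-image blocks, but different signaling and processing orders.
    assert set(qt) == set(hbqt) == set(vbqt)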
[00184] In addition, it can be discovered through research and experimental analysis that although the partition results obtained using the three different partition methods are the same, this does not mean that the coding results (compression efficiency) are the same. This is because the partition identifiers used in the three methods are different. In addition, the processing orders for the four sub-blocks are different. As shown in Figure 11, the left is a QT processing order, the middle is an HBQT processing order, and the right is a VBQT processing order. It can be discovered from this figure that a processing order of NxN blocks of sub-images obtained using the QT partition pattern can be different from at least a processing order of NxN blocks of sub-images obtained using the VBQT partition pattern. A difference in processing orders leads to a difference between reference pixels available in an encoding process, specifically for an I slice. As shown in Figure 12, different processing orders lead to the use of different reference pixels. The difference between the available reference pixels can lead to a difference in coding efficiency. Therefore, to reduce the redundancy generated in the encoding process due to the quadtree plus binary tree partition pattern, the present invention provides an improved idea, to be specific, a quadtree plus limited binary tree partition method. If NxN sub-images are desired, when a quadtree partition pattern is available and a binary tree partition pattern is also available, use of HBQT to obtain the NxN sub-images is limited; or when a quadtree partition pattern is unavailable, HBQT is allowed to obtain the NxN sub-images. A limitation on VBQT can be adjusted in a preset mode, to be specific, it can be adjusted to always be limited or to always be unlimited. Through research and experimental comparison, it is recommended that VBT not be limited within an I slice, and VBT should be limited within P and B slices.
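A minimal sketch of this limitation rule is given below, in Python and with illustrative function and flag names only (quadtree_allowed, slice_type); it simply restates the recommendation above as two decision functions and is not a definitive implementation.

    # HBQT (vertical split of both 2NxN sub-image blocks after a horizontal split)
    # is limited whenever the quadtree partition pattern is available for the block.
    def hbqt_limited(quadtree_allowed):
        return quadtree_allowed

    # The limitation on VBQT is preset; following the recommendation above, VBQT is
    # not limited within an I slice and is limited within P and B slices.
    def vbqt_limited(quadtree_allowed, slice_type):
        return quadtree_allowed and slice_type in ('P', 'B')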
[00185] Based on the above idea, an embodiment of the present invention provides an encoding method. The encoding method can be implemented using encoder 20. It should be noted that, in the following encoding method, only an improved part of the encoding method used by the encoder 20 described above is described, and the encoding method used by the encoder 20 is also applicable to a part that is not described. The encoding method provided in this implementation of the present invention is shown in Figure 13 and includes the following step:
[00186] S132. When partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint subimage processing mode, where the first 2NxN subimage block and the second 2NxN subimage block or the first Nx2N subimage block and the second Nx2N subimage block are obtained by partitioning the image block with the size of 2Nx2N.
[00187] In an intraprediction process, both quadtree partitioning and binary tree partitioning are allowed for the image with the size of 2Nx2N. To reduce redundancy generated in an encoding process due to a quadtree plus binary tree partition pattern, the first block of sub-images with the size of 2NxN and the second block of sub-images with the size of 2NxN, or the first block of sub-images with the size of Nx2N and the second block of sub-images with the size of Nx2N, or both the first block of 2NxN sub-images and the second block of 2NxN sub-images, and the first block of Nx2N sub-images and the second block of Nx2N sub-images, are processed in a constraint subimage processing mode. A specific processing mode, that is, the constraint subimage processing mode, includes the following steps:
[00188] S134. Determine whether the first block of sub-images needs to be additionally partitioned; and when the first block of sub-images does not need to be further partitioned, encode the first block of sub-images to generate a stream of encoded data; or when the first block of sub-images needs to be additionally partitioned, determine a partition pattern of the first block of sub-images, partition the first block of sub-images based on the partition pattern of the first block of sub-images, and encode the partition pattern of the first block of sub-images and the first partitioned sub-image block.
[00189] S136. Determine whether the second block of sub-images needs to be additionally partitioned; and when the second block of sub-images does not need to be further partitioned, encode the second block of sub-images to generate a stream of encoded data; or when the second block of sub-images needs to be further partitioned, determine a partition pattern of the second block of sub-images, partition the second block of sub-images based on the partition pattern of the second block of sub-images, and encode the partition pattern of the second block of sub-images and the second partitioned sub-image block, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
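A self-contained sketch of steps S134 and S136 follows. It is illustrative only: the helper names, the 'H'/'V' pattern symbols, and the toy list that stands in for the encoded data stream are assumptions of this sketch, and the partition decisions are supplied by the caller instead of a rate-distortion search.

    # Admissible partition patterns for the second sub-image block, restricted by the
    # pattern of the first block so that the joint result is never four NxN blocks.
    def constrained_patterns_for_second(first_pattern, first_is_2NxN):
        if first_is_2NxN:
            return ['H'] if first_pattern == 'V' else ['H', 'V']
        return ['V'] if first_pattern == 'H' else ['H', 'V']

    def encode_sub_block(needs_split, pattern, allowed, out):
        out.append(('split_flag', needs_split))   # whether the sub-image block is further partitioned
        if not needs_split:
            out.append(('coded_block',))          # encode the sub-image block itself
            return None
        assert pattern in allowed                 # the constraint forbids the excluded pattern
        out.append(('pattern', pattern))
        return pattern

    out = []                                      # toy stand-in for the encoded data stream
    p1 = encode_sub_block(True, 'V', ['H', 'V'], out)                               # S134, first 2NxN block
    p2 = encode_sub_block(True, 'H', constrained_patterns_for_second(p1, True), out)  # S136, second block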
[00190] That an image block partition pattern (pattern) obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern can also be understood as meaning that a size of a block of sub-images obtained after at least one of the first block of sub-images or the second block of sub-images is partitioned is not NxN. That is, a constraint on a sub-image partition pattern causes a difference between a size of a sub-image block obtained through binary tree partition and a size of a sub-image block obtained through quadtree partition, thereby eliminating a redundancy.
[00191] In the present invention, constraint processing is performed on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the constraint subimage processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process. In addition, it can also be proved based on the following experimental data that for an I slice, when a quadtree partition pattern is available, the encoding complexity (Enc T) can be decreased by 3% in the constraint subimage processing mode, specifically, when HBT (horizontal binary tree partition) is limited, and furthermore, no impact is exerted on the coding performance.
Seq          Y      U      V      Enc T
416x240      0.0%   0.0%   0.0%   96%
832x480      0.0%   0.0%   0.0%   97%
1080x720     0.0%   0.0%   0.0%   97%
Class B      0.0%   0.0%   0.0%   97%
2560x1600    0.0%   0.0%   0.0%   97%
Total        0.0%   0.0%   0.0%   97%
[00192] In the encoding method 130, in the constraint subimage processing mode, the partition pattern of the first block of sub-images is from a first set of partition patterns, and the partition pattern of the second block of sub-images is from a second set of partition patterns. The first set of partition patterns includes at least one partition pattern that is different from all partition patterns in the second set of partition patterns. For example, the first set of partition patterns can include vertical partition and horizontal partition, and the second set of partition patterns includes only horizontal partition or only vertical partition, that is, the second set of partition patterns is a subset of the first set of partition patterns. Specifically, a first set of partition patterns for the first block of sub-images with the size of 2NxN includes a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns includes the horizontal partition pattern; and a first set of partition patterns for the first block of sub-images with the size of Nx2N includes a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns includes the vertical partition pattern. This limitation mode can be used to avoid using, in a process of processing the first block of sub-images and the second block of sub-images, a partition pattern of partitioning the 2Nx2N block of images into four blocks of sub-images with a size of NxN, thereby reducing redundancy. In addition, in a process of performing decoding processing on a second sub-image, read code words may be reduced because the number of partition methods used for the second block of sub-images is limited. In addition, if a vertical partition is performed on the first block of 2NxN sub-images, only horizontal partition is allowed for the second block of 2NxN sub-images; in the encoding method, it may be sufficient to perform encoding only
to determine whether the second block of 2NxN sub-images is additionally partitioned, without the need to perform encoding to determine a specific partition pattern of the second block of 2NxN sub-images; and if the second block of 2NxN sub-images needs to be further partitioned, the partition pattern of the second block of 2NxN sub-images is horizontal partition by default. In this mode, the code words required for encoding can be further reduced. Correspondingly, if a horizontal partition is performed on the first block of Nx2N sub-images, only vertical partition is allowed for the second block of Nx2N sub-images; in the encoding method, it may be sufficient to perform encoding only to determine whether the second block of Nx2N sub-images is additionally partitioned, with no need to perform encoding to determine a specific partition pattern of the second block of Nx2N sub-images; and if the second block of Nx2N sub-images needs to be further partitioned, a partition pattern of the second block of Nx2N sub-images is vertical partition by default. In this mode, the code words required for encoding can be further reduced.
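One possible realization of this code word saving is sketched below (illustrative only; the names are assumptions of this sketch). The default inference when a single pattern remains admissible follows the description above.

    # The partition pattern of the second sub-image block is written only when more
    # than one pattern remains admissible; otherwise it is the single remaining
    # pattern by default (horizontal for 2NxN blocks, vertical for Nx2N blocks).
    def write_second_pattern(pattern, allowed, out):
        if len(allowed) == 1:
            return allowed[0]              # nothing is written to the data stream
        out.append(('pattern', pattern))
        return pattern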
[00193] Optionally, during the processing of the first block of sub-images and the second block of sub-images in the constraint subimage processing mode, that is, during additional binary tree partition, the first block of sub-images is processed before the second block of sub-images is processed. Therefore, a partition pattern of the second block of sub-images may depend on a partition pattern of the first block of sub-images, that is, the partition pattern of the second block of sub-images depends on / is restricted by the partition pattern of the first block of sub-images. Details can be as follows: When the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is vertical partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is horizontal partition or vertical partition.
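These dependency rules can be transcribed directly into a small lookup table. The following Python dictionary is illustrative only (the size labels and pattern names are assumptions of this sketch); it restates the four cases above without adding anything.

    # Key: (size of the two sub-image blocks, partition pattern of the first block).
    # Value: partition patterns admissible for the second block.
    SECOND_BLOCK_PATTERNS = {
        ('2NxN', 'vertical'):       {'horizontal'},
        ('2NxN', 'non-vertical'):   {'vertical', 'horizontal'},
        ('Nx2N', 'horizontal'):     {'vertical'},
        ('Nx2N', 'non-horizontal'): {'horizontal', 'vertical'},
    }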
[00194] Optionally, when a quadtree partition is allowed, the constraint subimage processing mode is available only for a subimage block obtained using a specific partition pattern, to be specific, it is only used to process the subimage block obtained using the specific partition pattern. For example, when a block of images with a size of 2Nx2N is partitioned into a first block of Nx2N sub-images and a second block of Nx2N sub-images, the constraint sub-image processing mode can be used. Specifically, in this partition pattern, the constraint subimage processing mode can be: It is limited that a horizontal partition method cannot be used for the second block of sub-images, that is, HBT is not used; or a partition pattern of the second block of sub-images is determined based on a partition pattern of the first block of sub-images, that is, when a horizontal partition pattern is used for the first block of sub-images, the horizontal partition pattern cannot be used for the second block of sub-images, or when a vertical partition pattern is used for the first block of sub-images, a horizontal partition pattern or the vertical partition pattern can be used for the second sub-image. In addition, when a block of images with a size of 2Nx2N is partitioned into a first block of 2NxN sub-images and a second block of 2NxN sub-images, the same partition pattern or different partition patterns can be used for the first block of 2NxN sub-images and the second block of 2NxN sub-images, and the first block of 2NxN sub-images and the second block of 2NxN sub-images are not processed in the constraint sub-image processing mode. In this mode, the flexibility of a processing process can be improved.
[00195] Optionally, the encoding method 130 can also include: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is not allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a non-constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N. The non-constraint sub-image processing mode includes: determining whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, encode the first block of sub-images to generate a stream of encoded data; or when the first block of sub-images needs to be additionally partitioned, determine a partition pattern of the first block of sub-images, partition the first block of sub-images based on the partition pattern of the first block of sub-images, and encode the partition pattern of the first block of sub-images and the first partitioned sub-image block, where the partition pattern of the first sub-image block is from a first set of partition patterns. The non-constraint sub-image processing mode also includes: determining whether a second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, encode the second block of sub-images to generate a stream of encoded data; or when the second block of sub-images needs to be further partitioned, determine a partition pattern of the second block of sub-images, partition the second block of sub-images based on the partition pattern of the second block of sub-images, and encode the partition pattern of the second block of sub-images and the second partitioned block of sub-images, where the partition pattern of the second block of sub-images is from a second set of partition patterns, and all of the partition patterns in the first set of partition patterns are the same as all of the partition patterns in the second set of partition patterns.
[00196] In this processing mode, the following can be ensured: When the quadtree partition pattern cannot be used, for example, according to an existing rule, when a quadtree leaf node is partitioned using a binary tree, leaf nodes obtained through binary tree partition cannot be partitioned using a quadtree, and using a non-constraint subimage processing mode to obtain subimage blocks with an NxN size is allowed. This can ensure
that a gain brought by a quadtree partition pattern can be fully used for an image.
[00197] Preferably, the constraint subimage processing mode is used to encode an I slice (slice), or can be used to encode a P slice or a B slice.
[00198] Corresponding to the encoding method 130, an embodiment of the present invention further provides a decoding method. The decoding method can be implemented by the decoder 30. It should be noted that, in the following decoding method, only an improved part of the decoding method used by the decoder 30 described above is described, and the decoding method used by the decoder 30 is also applicable to a part that is not described. As shown in Figure 14, decoding method 140 includes the following step:
[00199] S142. Analyze a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN subimages and a second block of 2NxN subimages or a first block of Nx2N subimages and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of sub-images 2NxN and the second block of sub-images 2NxN or the first block of sub-images Nx2N and the second block of sub-images Nx2N are obtained by partitioning the block of images with the size of 2Nx2N.
[00200] In an intraprediction process, both quadtree partition and binary tree partition are allowed for the image with the size of 2Nx2N. To reduce the redundancy generated in an encoding process due to the quadtree partition pattern plus binary tree, the first block of sub-images with the size of 2NxN and the second block of sub-images with the size of 2NxN or the
first block of sub-images with the size of Nx2N and the second block of sub-images with the size of Nx2N, or both the first block of 2NxN sub-images and the second block of 2NxN sub-images, and the first block of Nx2N sub-images and the second block of Nx2N sub-images, are processed in a constraint subimage processing mode. A specific processing mode, that is, the constraint subimage processing mode, includes the following steps:
[00201] S144. Determine whether the first block of sub-images needs to be additionally partitioned; and when the first block of sub-images does not need to be further partitioned, decode a stream of encoded data from the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyze the data flow to obtain a partition pattern of the first block of sub-images, and decode the first block of sub-images based on the partition pattern obtained from the first block of sub-images.
[00202] S146. Determine whether the second block of sub-images needs to be additionally partitioned; and when the second block of sub-images does not need to be further partitioned, decode a stream of encoded data from the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyze the data flow to obtain a partition pattern of the second block of sub-images, and decode the second block of sub-images based on the partition pattern obtained from the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[00203] That an image block partition pattern (pattern) obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern can also be understood as meaning that a size of a block of sub-images obtained after at least one of the first block of sub-images or the second block of sub-images is partitioned is not NxN. That is, a constraint on a sub-image partition pattern causes a difference between a size of a sub-image block obtained through binary tree partition and a size of a sub-image block obtained through quadtree partition, thereby eliminating a redundancy.
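On the decoder side, the same constraint allows the parser to narrow, and sometimes infer, the partition pattern of the second block. The following Python sketch is illustrative only: it assumes the refined signaling discussed further below, in which the pattern is not written when a single pattern remains admissible, and the toy symbol list standing in for the data stream is hand-built for the example.

    # S144/S146: parse the split flag of a sub-image block and, when it is split,
    # obtain its partition pattern, inferring it when only one pattern is admissible.
    def decode_sub_block(stream, allowed):
        split = stream.pop(0)[1]
        if not split:
            stream.pop(0)                 # consume and decode the coded sub-image block
            return None
        if len(allowed) == 1:
            return allowed[0]             # pattern inferred from the constraint, not read
        return stream.pop(0)[1]           # otherwise the pattern is parsed from the stream

    stream = [('split_flag', True), ('pattern', 'V'), ('split_flag', True)]
    p1 = decode_sub_block(stream, ['H', 'V'])   # first 2NxN block: vertical partition parsed
    p2 = decode_sub_block(stream, ['H'])        # second 2NxN block: horizontal inferred by default
    assert (p1, p2) == ('V', 'H')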
[00204] In the present invention, constraint processing is performed on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the constraint subimage processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process.
[00205] In decoding method 140, in the constraint sub-image processing mode, the partition pattern of the first block of sub-images is from a first set of partition patterns, and the partition pattern of the second block of sub-images is from a second set of partition patterns. The first set of partition patterns includes at least one partition pattern that is different from all partition patterns in the second set of partition patterns. For example, the first set of partition patterns can include vertical and horizontal partitions, and the second set of
partition patterns includes only horizontal partition or only vertical partition, that is, the second set of partition patterns is a subset of the first set of partition patterns. Specifically, a first set of partition patterns for the first block of sub-images with the size of 2NxN includes a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns includes the horizontal partition pattern; and a first set of partition patterns for the first block of sub-images with the size of Nx2N includes a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns includes the vertical partition pattern. This limitation mode can be used to avoid using, in a process of processing the first block of sub-images and the second block of sub-images, a partition pattern of partitioning the 2Nx2N block of images into four blocks of sub-images with a size of NxN, thereby reducing redundancy. In addition, in a process of performing decoding processing on a second sub-image, read code words may be reduced because the number of partition methods used for the second block of sub-images is limited. In addition, if a vertical partition is performed on the first block of 2NxN sub-images, only horizontal partition is allowed for the second block of 2NxN sub-images; in the decoding method, it may be sufficient to perform decoding only to determine whether the second block of 2NxN sub-images is additionally partitioned, with no need to perform decoding to determine a specific partition pattern of the second block of 2NxN sub-images; and if the second block of 2NxN sub-images needs to be further partitioned, the partition pattern of the second block of 2NxN sub-images is horizontal partition by default. In this mode, data flows
that need to be read can be further reduced. Correspondingly, if a horizontal partition is performed on the first block of Nx2N sub-images, only vertical partition is allowed for the second block of Nx2N sub-images; in the decoding method, it may be sufficient to determine only whether the second block of Nx2N sub-images is additionally partitioned, with no need to perform decoding to determine a specific partition pattern of the second block of Nx2N sub-images; and if the second block of Nx2N sub-images needs to be further partitioned, the partition pattern of the second block of Nx2N sub-images is vertical partition by default. In this mode, data streams that need to be read can be further reduced.
[00206] Optionally, during the processing of the first block of sub-images and the second block of sub-images in the constraint sub-image processing mode, that is, during an additional binary tree partition, the first block of sub-images is processed before the second block of sub-images is processed. Therefore, a partition pattern of the second block of sub-images may depend on a partition pattern of the first block of sub-images, that is, the partition pattern of the second block of sub-images is determined by the partition pattern of the first block of sub-images. Details can be as follows: When the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is vertical partition or horizontal partition; or when the
first block of sub-images and second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is horizontal partition or vertical partition.
[00207] Optionally, when a quadtree partition is allowed, the constraint subimage processing mode is only available for a subimage block obtained using a specific partition pattern, to be specific, it is only used to process the subimage block obtained using the specific partition pattern. For example, when a block of images with a size of 2Nx2N is partitioned into a first block of Nx2N sub-images and a second block of Nx2N sub-images, the constraint sub-image processing mode can be used. Specifically, in this partition pattern, the constraint subimage processing mode can be: It is limited that a horizontal partition method cannot be used for the second block of sub-images, that is, HBT is not used; or a partition pattern of the second block of sub-images is determined based on a partition pattern of the first block of sub-images, that is, when a horizontal partition pattern is used for the first block of sub-images, the horizontal partition pattern cannot be used for the second block of sub-images, or when a vertical partition pattern is used for the first block of sub-images, a horizontal partition pattern or the vertical partition pattern can be used for the second sub-image. In addition, when a block of images with a size of 2Nx2N is partitioned into a first block of
2NxN sub-images and a second block of 2NxN sub-images, the same partition pattern or different partition patterns can be used for the first block of sub-images and the second block of sub-images, and the first block of sub-images and the second block of subimages are not processed in constraint subimage processing mode. In this mode, the flexibility of a processing process can be improved.
[00208] Optionally, decoding method 140 can also include: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is not allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a non-constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N. The non-constraint sub-image processing mode includes: determining whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be additionally partitioned, decode a stream of encoded data of the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyze the data flow to obtain a partition pattern of the first block of sub-images, and decode the first block of sub-images based on the partition pattern obtained from the first block of sub-images, where the partition pattern of the first block of sub-images is that of a first set of partition patterns. The non-constraint sub-image processing mode also includes: determining whether the second block of sub-images needs to be further partitioned; and when
the second block of sub-images need not be additionally partitioned, decode a stream of encoded data from the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyze the data flow to obtain a partition pattern of the second block of sub-images, and decode the second block of sub-images based on the partition pattern obtained from the second block of sub-images, where the partition pattern of the second block of sub-images is that of a second set of partition patterns, and all of the partition patterns in the first set of partition patterns are the same as all of the partition patterns in the second set of partition patterns.
[00209] In this processing mode, the following can be ensured: When the quadtree partition pattern cannot be used, for example, according to an existing rule, when a quadtree leaf node is partitioned using a binary tree, leaf nodes obtained through binary tree partition cannot be partitioned using a quadtree, and using a non-constraint subimage processing mode to obtain subimage blocks with an NxN size is allowed. This can ensure that a gain brought about in a quadtree partition pattern can be fully utilized for an image.
[00210] Preferably, the restriction subimage processing mode is used to decode an I slice (slice), or it can be used to decode a P slice or a B slice.
[00211] Based on the above idea, one embodiment of the present invention provides another encoding method 150. As shown in Figure 15, encoding method 150 includes the following step: [00212] S152. When partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN subimages and a
second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of sub-images 2NxN and the second block of sub-images 2NxN or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N, and the restriction sub-image processing mode includes the following steps:
[00213] S154. Determine a partition pattern of the first block of sub-images, encode the partition pattern of the first block of sub-images, and encode the first block of sub-images based on the partition pattern of the first block of sub-images.
[00214] S156. Determine a partition pattern of the second block of sub-images, encode the partition pattern of the second block of sub-images, and encode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
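A minimal sketch of this variant follows, in Python and with illustrative names only. It assumes the reading of the pattern sets given further below, in which "no partition" is itself one of the admissible partition patterns, so a single partition identifier is encoded for each sub-image block and no separate split determination is needed.

    NONE, HORIZONTAL, VERTICAL = 'none', 'horizontal', 'vertical'

    # Admissible identifiers for the second sub-image block, restricted by the
    # identifier chosen for the first block (see the cases listed below).
    def second_block_identifiers(first_identifier, first_is_2NxN):
        if first_is_2NxN:
            return [NONE, HORIZONTAL] if first_identifier == VERTICAL else [NONE, HORIZONTAL, VERTICAL]
        return [NONE, VERTICAL] if first_identifier == HORIZONTAL else [NONE, HORIZONTAL, VERTICAL]

    def encode_identifier(identifier, allowed, out):
        assert identifier in allowed
        out.append(('partition_identifier', allowed.index(identifier)))  # S154 / S156
        return identifier

    out = []
    first = encode_identifier(VERTICAL, [NONE, HORIZONTAL, VERTICAL], out)              # first 2NxN block
    second = encode_identifier(HORIZONTAL, second_block_identifiers(first, True), out)  # second block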
[00215] In the encoding method, the block of sub-images with the size of Nx2N or the block of sub-images with the size of 2NxN is encoded in the constraint sub-image processing mode, thereby reducing the redundancy that exists when an image is partitioned using a quadtree plus a binary tree.
[00216] The encoding method provided in this implementation has all the beneficial effects of the encoding method 130, and may require less data flow. In addition, unless otherwise specified, encoding method 150 is applicable to all extended modes of encoding method 130 above. For brevity, details are not repeated here. All applicable limitations of encoding method 130, to be specific, extended modes, are referred to here as limitations and extensions of encoding method 150.
[00217] A difference between encoding method 150 and encoding method 130 is that, in the encoding method 150, for the first block of 2NxN sub-images and the second block of 2NxN sub-images, the first set of partition patterns includes no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns includes no partition and the horizontal partition pattern; and for the first block of Nx2N sub-images and the second block of Nx2N sub-images, the first set of partition patterns includes no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns includes no partition and the vertical partition pattern.
[00218] Furthermore, a difference between the encoding method 150 and the encoding method 130 is that, in the encoding method 150, that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images includes: when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is no partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is no partition, vertical partition, or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is no partition or vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is no partition, horizontal partition, or vertical partition.
[00219] Corresponding to the encoding method 150, an embodiment of the present invention further provides a decoding method 160. The decoding method 160 is shown in Figure 16. A difference between decoding method 160 and decoding method 140 is that, in the decoding method 160, a stream of encoded data is directly decoded according to an indication of the image block in the data stream, without the need to predetermine, based on the data flow, whether a block of sub-images with a size of Nx2N or a block of sub-images with a size of 2NxN needs to be additionally partitioned, and then perform decoding. In this mode, the determination logic can be reduced, thereby reducing the complexity of decoding. Specifically, decoding method 160 includes the following step:
[00220] S162. Analyze a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN subimages and a second block of 2NxN subimages or a first block of Nx2N subimages and a second block of Nx2N subimages in a subimage processing mode of
restriction, where the first block of 2NxN subimages and the second block of 2NxN subimages or the first block of Nx2N subimages and the second block of Nx2N subimages are obtained by partitioning the block of images with the size of 2Nx2N, and the constraint subimage processing mode includes the following steps:
[00221] S164. Analyze the data stream to determine a partition identifier of the first block of sub-images, determine a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decode the first block of sub-images based on the partition pattern of the first block of sub-images.
[00222] S166. Analyze the data flow to determine a partition identifier of the second block of sub-images, determine a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
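The corresponding parsing can be sketched as follows (illustrative only; the index-based identifier coding and the hand-built toy stream are assumptions of this sketch, not a definitive syntax). The partition identifier of each sub-image block is read directly and mapped to a pattern from the admissible set, without a separate determination of whether the block is further partitioned.

    # S164/S166: read a partition identifier and map it to a partition pattern;
    # 'none' means the sub-image block is decoded without further partition.
    def parse_identifier(stream, allowed):
        index = stream.pop(0)
        return allowed[index]

    stream = [2, 1]                                                       # hand-built toy data stream
    first = parse_identifier(stream, ['none', 'horizontal', 'vertical'])  # vertical
    second = parse_identifier(stream, ['none', 'horizontal'])             # set restricted by the first pattern
    assert (first, second) == ('vertical', 'horizontal')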
[00223] The decoding method provided in this implementation has all the beneficial effects of decoding method 140, and may require less data flow. In addition, unless otherwise specified, decoding method 160 is applicable to all extended modes above decoding method 140. For brevity, details are not repeated here. All limitations
of decoding method 140, to be specific, extended modes, are referred to herein as limitation and extension of decoding method 160.
[00224] A difference between decoding method 160 and decoding method 140 is that, in decoding method 160, for the first block of 2NxN sub-images and the second block of 2NxN sub-images, the first set of partition patterns includes no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns includes no partition and the horizontal partition pattern; and for the first block of Nx2N sub-images and the second block of Nx2N sub-images, the first set of partition patterns includes no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns includes no partition and the vertical partition pattern.
[00225] Furthermore, a difference between decoding method 160 and decoding method 140 is that, in decoding method 160, that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images includes: when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is no partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is no partition, vertical partition, or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is no partition or vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is no partition, horizontal partition, or vertical partition.
[00226] Referring to Figure 17, the present invention further provides an encoding apparatus 170 configured to implement the encoding method 130. The encoding apparatus 170 has the same architecture as the encoder 20 described above in the present invention. One difference is that a partition pattern used by the encoding apparatus 170 to partition a block of images during intraprediction is different from that used by the encoder 20, but the encoding apparatus 170 implements all other encoding processing processes in the same way as the encoder 20. Specifically, the encoding apparatus 170 includes:
[00227] a restriction encoding determination module 172, configured to: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint subimage processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N; and
[00228] a restriction encoding module 174 that is configured to implement the constraint subimage processing mode and that includes:
[00229] a first subimage processing module 1742, configured to: determine whether the first block of sub-images needs to be additionally partitioned; and when the first block of sub-images does not need to be further partitioned, encode the first block of sub-images to generate a stream of encoded data; or when the first block of sub-images needs to be additionally partitioned, determine a partition pattern of the first block of sub-images, partition the first block of sub-images based on the partition pattern of the first block of sub-images, and encode the partition pattern of the first block of sub-images and the first partitioned sub-image block; and
[00230] a second subimage processing module 1744, configured to: determine whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, encode the second block of sub-images to generate a stream of encoded data; or when the second block of sub-images needs to be further partitioned, determine a partition pattern of the second block of sub-images, partition the second block of sub-images based on the partition pattern of the second block of sub-images, and encode the partition pattern of the second block of sub-images and the second partitioned sub-image block, where the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[00231] In the present invention, the coding apparatus 170 performs a constraint processing on the first block of sub-images 2NxN and the second block of sub-images 2NxN and / or the first block of sub-images Nx2N and the second block of sub-images Nx2N in the constraint subimage processing mode, thereby reducing the redundancy that exists in a quadtree partition process plus binary tree.
[00232] Referring to Figure 18, the present invention further provides a decoding apparatus 180 configured to implement the decoding method 140. The decoding apparatus 180 has the same architecture as the decoder 30 described above in the present invention, and a difference is that a partition pattern used by the decoding apparatus 180 to partition an image block during intraprediction is different from that used by the decoder 30, but the decoding apparatus 180 can implement all other decoding processing processes in the same way as the decoder 30. Specifically, the decoding apparatus 180 includes:
[00233] a restriction decoding determination module 182, configured to: analyze a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of sub-images 2NxN and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of sub-images 2NxN or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and [00234] a restriction decoding module 184 that is
configured to implement the constraint subimage processing mode and which includes:
[00235] a first 1842 subimage processing module, configured to: determine whether the first subimage block needs to be additionally partitioned; and when the first block of sub-images does not need to be further partitioned, decode a stream of encoded data from the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyze the data flow to obtain a partition pattern of the first block of sub-images, and decode the first block of sub-images based on the partition pattern obtained from the first block of sub-images; and [00236] a second 1844 subimage processing module, configured to: determine whether the second subimage block needs to be additionally partitioned; and when the second block of sub-images does not need to be further partitioned, decode a stream of encoded data from the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyze the data flow to obtain a partition pattern of the second block of sub-images, and decode the second block of sub-images based on the partition pattern obtained from the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern (pattern) obtained for the second block of sub-images partitioned and the first block of sub-images partitioned is different from an image block partition pattern (pattern) obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[00237] The decoding apparatus 180 provided in this implementation of the present invention performs constraint processing on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the constraint sub-image processing mode, thereby reducing a redundancy that exists in a quadtree plus binary tree partition process.
[00238] Optionally, the restriction decoding determination module 182 is further configured to: when partitioning the 2Nx2N image block using a quadtree partition pattern is not allowed, process encoded data streams of the first block of sub-images and the second block of sub-images in a non-restriction sub-image processing mode; and correspondingly, the decoding apparatus 180 further includes: a non-restriction decoding module 186 which is configured to implement the non-restriction sub-image processing mode and which includes:
[00239] a third sub-image processing module 1862, configured to: analyze the data stream to determine a partition identifier of the first block of sub-images, determine a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decode the first block of sub-images based on the partition pattern of the first block of sub-images; and
[00240] a fourth sub-image processing module 1864, configured to: analyze the data stream to determine a partition identifier of the second block of sub-images, determine a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the first block of sub-images and the partition pattern of the second block of sub-images are selected from the same set of partition patterns.
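Continuing the hypothetical sketch given above, the mode selection itself can be pictured as follows: only when the quadtree partition pattern is allowed for the 2Nx2N block is the second block of sub-images decoded with a restricted candidate set; otherwise both blocks of sub-images draw their partition patterns from the same set. The helper names reuse the invented functions of the previous sketch.

```python
# Illustrative mode selection (hypothetical names, continuing the previous sketch).
def decode_pair(reader, sub_block_shape, quadtree_split_allowed):
    if quadtree_split_allowed:
        # Constraint sub-image processing mode: the second block is restricted.
        decode_constrained_pair(reader, sub_block_shape)
    else:
        # Non-restriction mode: both blocks use the same set of partition patterns.
        decode_sub_block(reader, [HOR, VER])
        decode_sub_block(reader, [HOR, VER])
```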
[00241] Referring to Figure 19, the present invention further provides an encoding apparatus 190 configured to implement the encoding method 150. The encoding apparatus 190 has the same architecture as the encoder 20 described above in the present invention. One difference is that a partition pattern used by the encoding apparatus 190 to partition a block of images during intra prediction is different from that used by the encoder 20, but the encoding apparatus 190 can implement all other encoding processes in the same manner as the encoder 20. Specifically, the encoding apparatus 190 includes:
[00242] a restriction encoding determination module 192, configured to: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N; and
[00243] a constraint encoding module 194 that is configured to implement the constraint sub-image processing mode and that includes:
[00244] a first sub-image processing module 1942, configured to: determine a partition pattern of the first block of sub-images, encode the partition pattern of the first block of sub-images, and encode the first block of sub-images based on the partition pattern of the first block of sub-images; and
[00245] a second sub-image processing module 1944, configured to: determine a partition pattern of the second block of sub-images, encode the partition pattern of the second block of sub-images, and encode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[00246] In the present invention, the encoding apparatus 190 performs constraint processing on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the constraint sub-image processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process.
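A corresponding encoder-side sketch is given below, again reusing the invented names of the decoding sketch. It is only a schematic illustration under assumed elements: the rate-distortion cost function, the writer object, and the possibility of not splitting a sub-image block at all are assumptions of the sketch rather than elements taken from the encoding apparatus 190. The essential point it shows matches the description: the second block of sub-images is evaluated only over the restricted candidate patterns, so the combination that would duplicate the quadtree split is never searched or signaled.

```python
# Illustrative encoder-side sketch (hypothetical helpers, not a real encoder API).
def choose_pattern(block, candidates, rd_cost):
    """Pick the candidate with the lowest rate-distortion cost; None stands
    for not splitting the sub-image block any further."""
    return min([None] + candidates, key=lambda pattern: rd_cost(block, pattern))

def encode_constrained_pair(first_block, second_block, sub_block_shape, rd_cost, writer):
    first_pattern = choose_pattern(first_block, [HOR, VER], rd_cost)
    writer.encode_sub_block(first_block, first_pattern)
    # The second block is only searched over the restricted candidate set.
    second_candidates = candidate_patterns_for_second(first_pattern, sub_block_shape)
    second_pattern = choose_pattern(second_block, second_candidates, rd_cost)
    writer.encode_sub_block(second_block, second_pattern)
```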
[00247] Referring to Figure 20, the present invention further provides a decoding apparatus 210 configured to implement the decoding method 160. The decoding apparatus 210 has the same architecture as the decoder 30 described above in the present invention, and a difference is that a partition pattern used by the decoding apparatus 210 to partition a block of images during intra prediction is different from that used by the decoder 30, but the decoding apparatus 210 can implement all other decoding processes in the same manner as the decoder 30. Specifically, the decoding apparatus 210 includes:
[00248] a restriction decoding determination module 212, configured to: analyze a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and
[00249] a constraint decoding module 214 that is configured to implement the constraint sub-image processing mode and that includes:
[00250] a first sub-image processing module 2142, configured to: analyze the data stream to determine a partition identifier of the first block of sub-images, determine a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decode the first block of sub-images based on the partition pattern of the first block of sub-images; and
[00251] a second sub-image processing module 2144, configured to: analyze the data stream to determine a partition identifier of the second block of sub-images, determine a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[00252] The decoding apparatus 210 provided in this implementation of the present invention performs constraint processing on the first block of 2NxN sub-images and the second block of 2NxN sub-images and/or the first block of Nx2N sub-images and the second block of Nx2N sub-images in the constraint sub-image processing mode, thereby reducing the redundancy that exists in a quadtree plus binary tree partition process.
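The sketch below illustrates one possible reading of the identifier-based variant implemented by the modules 2142 and 2144: the partition identifier parsed from the data stream is interpreted as an index into the candidate list that is valid for that block, so the same identifier value can map to different partition patterns for the first and the second block of sub-images. The index interpretation and the helper names are assumptions of the sketch, not definitions taken from the apparatus; the candidate sets including the option of no partition follow the description of the constraint sub-image processing mode.

```python
# Illustrative sketch of identifier-based constraint decoding (hypothetical names).
HOR, VER, NO_SPLIT = "horizontal", "vertical", "no partition"

def identifier_candidates_for_second(first_pattern, sub_block_shape):
    """Candidate patterns for the second block, including the option of not
    splitting it, under the same restriction as before."""
    if sub_block_shape == "2NxN" and first_pattern == VER:
        return [NO_SPLIT, HOR]
    if sub_block_shape == "Nx2N" and first_pattern == HOR:
        return [NO_SPLIT, VER]
    return [NO_SPLIT, HOR, VER]

def decode_by_identifier(reader, candidates):
    identifier = reader.read_partition_identifier(len(candidates))
    pattern = candidates[identifier]      # the identifier indexes the valid candidates
    reader.decode_with_pattern(pattern)
    return pattern

def decode_identifier_pair(reader, sub_block_shape):
    first_pattern = decode_by_identifier(reader, [NO_SPLIT, HOR, VER])
    second_candidates = identifier_candidates_for_second(first_pattern, sub_block_shape)
    decode_by_identifier(reader, second_candidates)
```

Because the second candidate list is shorter when the restriction applies, the identifier of the second block can be coded with fewer values, which is the source of the bit saving.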
[00253] Optionally, the restriction decoding determination module 212 is further configured to: when partitioning the 2Nx2N image block using a quadtree partition pattern is not allowed, process encoded data streams of the first block of sub-images and the second block of sub-images in a non-restriction sub-image processing mode; and correspondingly, the decoding apparatus 210 further includes: a non-restriction decoding module 216 which is configured to implement the non-restriction sub-image processing mode and which includes:
[00254] a third sub-image processing module 2162, configured to: analyze the data stream to determine a partition identifier of the first block of sub-images, determine a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decode the first block of sub-images based on the partition pattern of the first block of sub-images; and
[00255] a fourth sub-image processing module 2164, configured to: analyze the data stream to determine a partition identifier of the second block of sub-images, determine a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the first block of sub-images and the partition pattern of the second block of sub-images are selected from the same set of partition patterns.
[00256] The coding apparatus and the decoding apparatus in the above implementations of the present invention can be applied to various electronic devices. For example, the following provides an example in which the modalities of the present invention are applied to a television device and a mobile phone.
[00257] Figure 21 is a schematic structural diagram of a television device to which an embodiment of the present invention is applicable. A television device 900 includes an antenna 901, a tuner 902, a multi-channel demultiplexer 903, a decoder 904, a video signal processor 905, a display unit 906, an audio signal processor 907, a speaker 908, an external interface 909, a controller 910, a user interface 911, and a bus 912.
[00258] The tuner 902 extracts a signal of a desired frequency channel from a transmission signal received through the antenna 901, and demodulates the extracted signal. Then the tuner 902 outputs an encoded bit stream obtained through demodulation to the multi-channel demultiplexer 903. That is, the tuner 902 serves as a transmission apparatus, in the television device 900, that receives an encoded data stream of an encoded image.
[00259] The multi-channel demultiplexer 903 separates, from the encoded bit stream, a video stream and an audio stream of a program to be watched, and outputs the separated streams to the decoder 904. The multi-channel demultiplexer 903 further extracts auxiliary data, for example, an electronic program guide, from the encoded bit stream, and provides the extracted data to the controller 910. If the encoded bit stream is scrambled, the multi-channel demultiplexer 903 can descramble the encoded bit stream.
[00260] The decoder 904 decodes the video stream and the audio stream that are input by the multi-channel demultiplexer 903. Then the decoder 904 outputs the video data generated through decoding to the video signal processor 905. The decoder 904 further outputs the audio data generated through decoding to the audio signal processor 907.
[00261] The video signal processor 905 reproduces the video data input by the decoder 904, and displays the video data on the display unit 906. The video signal processor 905 can also display, on the display unit 906, an application screen provided through a network. In addition, the video signal processor 905 can perform additional processing, for example, noise removal, on the video data based on a setting. The video signal processor 905 can also generate a GUI (graphical user interface) image and superimpose the generated image on an output image.
[00262] The display unit 906 is driven by a drive signal provided by the video signal processor 905, and displays a video or an image on a video screen of a display device, for example, a liquid crystal display, a plasma display, or an OELD (organic electroluminescent display).
[00263] The audio signal processor 907 performs reproduction processing, for example, digital-to-analog conversion and amplification, on the audio data input by the decoder 904, and outputs the audio using the speaker 908. In addition, the audio signal processor 907 can perform additional processing, for example, noise removal, on the audio data.
[00264] The external interface 909 is an interface configured to connect the television device 900 and an external device or network. For example, the video stream or the audio stream received by the external interface 909 can be decoded by the decoder 904. That is, the external interface 909 also serves as a transmission apparatus, in the television device 900, that receives an encoded data stream of an encoded image.
[00265] The controller 910 includes a processor and a memory. The memory stores a program to be executed by the processor, program data, auxiliary data, data obtained through the network, or the like. For example, when the television device 900 is turned on, the program stored in the memory is read and executed by the processor. The processor controls an operation of the television device 900 based on a control signal input from the user interface 911.
[00266] The user interface 911 is connected to the controller 910. For example, the user interface 911 includes a button and a switch that are used by a user to operate the television device 900, and a receiving unit configured to receive a remote control signal. The user interface 911 detects an operation performed by the user using these components, generates a control signal, and outputs the generated control signal to the controller 910.
[00267] The bus 912 implements a mutual connection between the tuner 902, the multi-channel demultiplexer 903, the decoder 904, the video signal processor 905, the audio signal processor 907, the external interface 909, and the controller 910.
[00268] In the television device 900 that has this structure, the decoder 904 has a function of the video decoding apparatus according to the above modality.
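Purely as an illustration of the signal flow just described for the television device 900, the following sketch chains the components in the order in which the data passes through them. Every class and method name here is invented for the sketch and does not correspond to any real interface.

```python
# Illustrative sketch of the television device 900 signal flow (hypothetical names).
def play_program(antenna_signal, tuner, demultiplexer, decoder,
                 video_processor, audio_processor, display, speaker):
    bitstream = tuner.demodulate(antenna_signal)                  # tuner 902
    video_stream, audio_stream = demultiplexer.split(bitstream)   # demultiplexer 903
    video_data = decoder.decode_video(video_stream)               # decoder 904
    audio_data = decoder.decode_audio(audio_stream)
    display.show(video_processor.reproduce(video_data))           # processor 905 / unit 906
    speaker.output(audio_processor.reproduce(audio_data))         # processor 907 / speaker 908
```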
[00269] Figure 22 is a schematic structural diagram of a mobile phone device to which an embodiment of the present invention is applicable. A mobile phone device 920 includes an antenna 921, a communications unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processor 927, a multi-channel demultiplexer 928, a recording/playback unit 929, a display unit 930, a controller 931, an operation unit 932, and a bus 933.
[00270] The antenna 921 is connected to the communications unit 922. The speaker 924 and the microphone 925 are connected to the audio codec 923. The operation unit 932 is connected to the controller 931. The bus 933 implements a mutual connection between the communications unit 922, the audio codec 923, the camera unit 926, the image processor 927, the multi-channel demultiplexer 928, the recording/playback unit 929, the display unit 930, and the controller 931.
[00271] The mobile phone device 920 performs operations in various operating modes, for example, sending / receiving audio signals, sending / receiving email and image data, photographing images, and recording data. The various modes of operation include a voice call mode, a data communication mode, an image mode, and a videophone mode.
[00272] In voice call mode, an analog audio signal generated by the microphone 925 is provided to the audio codec 923. The audio codec 923 converts the analog audio signal into audio data, performs analog-to-digital conversion on the converted audio data, and compresses the audio data. Then the audio codec 923 outputs the audio data obtained as a result of the compression to the communications unit 922. The communications unit 922 encodes and modulates the audio data to generate a to-be-sent signal. Then the communications unit 922 sends the generated to-be-sent signal to a base station using the antenna 921. The communications unit 922 further amplifies a radio signal received using the antenna 921, and performs frequency conversion on the amplified radio signal to obtain a received signal. The communications unit 922 then demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923. The audio codec 923 decompresses the audio data and performs digital-to-analog conversion on the audio data to generate an analog audio signal. Then the audio codec 923 provides the generated audio signal to the speaker 924 in order to output audio from the speaker 924.
[00273] In data communication mode, for example, the controller 931 generates, based on an operation performed by a user using the operation unit 932, text data to be included in an email. The controller 931 further displays the text on the display unit 930. The controller 931 further generates email data in response to a sending instruction that comes from the user through the operation unit 932, and sends the generated email data to the communications unit 922. The communications unit 922 encodes and modulates the email data to generate a to-be-sent signal. Then the communications unit 922 sends the generated to-be-sent signal to a base station using the antenna 921. The communications unit 922 further amplifies a radio signal received using the antenna 921, and performs frequency conversion on the amplified radio signal to obtain a received signal. The communications unit 922 then demodulates and decodes the received signal to restore the email data, and sends the restored email data to the controller 931. The controller 931 displays the email content on the display unit 930, and stores the email data on a storage medium of the recording/playback unit 929.
[00274] The recording/playback unit 929 includes a readable/writable storage medium. For example, the storage medium can be an internal storage medium, or it can be an externally installed storage medium, for example, a hard disk, a magnetic disk, a magneto-optical disk, a USB (Universal Serial Bus) memory, or a memory card.
[00275] In image mode, the camera unit 926 performs imaging of an object to generate image data, and outputs the generated image data to the image processor 927. The image processor 927 encodes the image data input from the camera unit 926, and stores an encoded data stream on a storage medium of the recording/playback unit 929.
[00276] In videophone mode, the multi-channel demultiplexer 928 multiplexes a video stream encoded by the image processor 927 and an audio stream input by the audio codec 923, and outputs a multiplexed stream to the communications unit 922. The communications unit 922 encodes and modulates the multiplexed stream to generate a to-be-sent signal. Then the communications unit 922 sends the generated to-be-sent signal to a base station using the antenna 921. The communications unit 922 further amplifies a radio signal received using the antenna 921, and performs frequency conversion on the amplified radio signal to obtain a received signal. The to-be-sent signal and the received signal may include an encoded bit stream. The communications unit 922 then demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the multi-channel demultiplexer 928. The multi-channel demultiplexer 928 separates a video stream and an audio stream from the input stream, outputs the video stream to the image processor 927, and outputs the audio stream to the audio codec 923. The image processor 927 decodes the video stream to generate video data. The video data is provided to the display unit 930, and a series of images is displayed by the display unit 930. The audio codec 923 decompresses the audio stream and performs digital-to-analog conversion on the audio stream to generate an analog audio signal. Then the audio codec 923 provides the generated audio signal to the speaker 924 in order to output audio from the speaker 924.
[00277] In the mobile phone device 920 that has this structure, the image processor 927 has functions of the video encoding device and the video decoding device according to the above modalities.
[00278] In one or more embodiments, the described functions can be implemented by hardware, software, firmware, or any combination thereof. If the functions are implemented by software, the functions can be stored in a computer-readable medium as one or more instructions or code, or transmitted over a computer-readable medium, and are executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium (which corresponds to a tangible medium such as a data storage medium) or a communication medium, and the communication medium includes, for example, any medium that facilitates transmission of a computer program from one location to another according to a communications protocol. In this manner, the computer-readable medium can roughly correspond to: (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium such as a signal or a carrier. The data storage medium can be any available medium that can be accessed by one or more computers or one or more processors to retrieve an instruction, code, and/or a data structure to implement the technologies described in the present invention. A computer program product may include a computer-readable medium.
[00279] By way of example and without limitation, such a computer-readable storage medium may include a RAM, a ROM, an EEPROM, a CD-ROM, other optical disc storage or magnetic disk storage, another magnetic storage device, a flash memory, or any other medium that can store required program code in the form of an instruction or a data structure and that can be accessed by a computer. In addition, any connection can be properly referred to as a computer-readable medium. For example, if an instruction is sent from a website, a server, or another remote source using a coaxial cable, an optical cable, a twisted pair, a digital subscriber line (DSL), or a wireless technology (for example, infrared, radio, or microwave), the coaxial cable, the optical cable, the twisted pair, the DSL, or the wireless technology (for example, infrared, radio, or microwave) is included in the definition of a medium. However, it should be understood that the computer-readable storage medium and the data storage medium may not include a connection, a carrier, a signal, or another transitory medium, but are non-transitory tangible storage media. A disk and a disc used in this specification include a compact disc (CD), a laser disc, an optical disc, a digital versatile disc (DVD), a floppy disk, and a Blu-ray disc. The disk usually copies data magnetically, and the disc copies data optically using a laser. A combination of the above should also be included in the scope of the computer-readable medium.
[00280] An instruction can be executed by one or more processors, such as one or more digital signal processors (DSP), a general purpose microprocessor, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other equivalent integrated or discrete logic circuits. Therefore, the term "processor" used in this specification can refer to the above structure, or any other structure that can be applied in implementing the technologies described in this specification. In addition, in some aspects, the functions described in this specification may be provided within a dedicated hardware and/or software module configured to encode and decode, or may be incorporated into a combined encoder-decoder. In addition, the technologies may be fully implemented in one or more circuits or logic elements.
[00281] The technologies in the present invention can be widely implemented by a plurality of apparatuses or devices. The apparatuses or devices include a radio device, an integrated circuit (IC), or a set of ICs (for example, a chip set). In the present invention, various components, modules, and units are described to emphasize the functions of an apparatus that is configured to implement the described technologies, and the functions do not necessarily need to be implemented by different hardware units. Rather, as described above, various units may be combined into one encoder-decoder hardware unit, or may be provided by a set of interoperable hardware units (including one or more processors described above) together with appropriate software and/or firmware.
[00282] It should be understood that "one embodiment" or "an embodiment" mentioned throughout the specification means that specific features, structures, or characteristics related to the embodiment are included in at least one embodiment of the present invention. Therefore, "in one embodiment" or "in an embodiment" appearing throughout the specification does not necessarily refer to the same embodiment. In addition, these specific features, structures, or characteristics may be combined in one or more embodiments in any appropriate manner.
[00283] It should be understood that the sequence numbers of the above processes do not imply an execution order in the various embodiments of the present invention. The execution order of the processes should be determined according to the functions and internal logic of the processes, and should not constitute any limitation on the implementation processes of the embodiments of the present invention.
[00284] In addition, the terms "system" and "network" can be used interchangeably in this specification. The term "and/or" in this specification describes only an association relationship between the associated objects and represents that three relationships may exist. For example, A and/or B can represent the following three cases:
only A exists, both A and B exist, and only B exists. In addition, the character "/" in this specification generally indicates an "or" relationship between the associated objects.
[00285] It should be understood that, in the embodiments of this application, "B corresponding to A" indicates that B is associated with A, and B can be determined according to A. However, it should further be understood that determining B according to A does not mean that B is determined according to A only; that is, B can also be determined according to A and/or other information.
[00286] A person skilled in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps can be implemented by electronic hardware, computer software, or a combination thereof. To clearly describe the interchangeability between hardware and software, the foregoing has generally described the compositions and steps of each example according to the functions. Whether the functions are performed by hardware or software depends on the specific applications and design constraints of the technical solutions. A person skilled in the art can use different methods to implement the described functions for each specific application, but the implementation should not be considered to be beyond the scope of the present invention.
[00287] It can be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the above system, apparatus, and unit, reference may be made to a corresponding process in the above method embodiments, and details are not described here again.
[00288] In the several embodiments provided in this application, it should be understood that the described system, apparatus, and method can be implemented in other manners. For example, the described apparatus is merely an example. For example, the division into units is merely a logical function division and can be another division in an actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some characteristics may be ignored or may not be implemented. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections can be implemented using some interfaces. The indirect couplings or communication connections between the apparatuses or units can be implemented in electrical, mechanical, or other forms.
[00289] The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed over a plurality of network units. Some or all of the units can be selected based on actual requirements to achieve the objectives of the solutions of the embodiments.
[00290] In addition, the functional units in the embodiments of the present invention can be integrated into one processing unit, or each of the units can exist alone physically, or two or more units are integrated into one unit.
[00291] The above descriptions are merely specific implementations of the present invention, but are not intended to limit the scope of protection of the present invention. Any variation or substitution readily envisioned by a person skilled in the art within the technical scope described in the present invention shall fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.
Claims
1. Decoding method, characterized by the fact that it comprises:
analyze (142) a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, in which the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N; and the constraint sub-image processing mode comprises:
determine (144) whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, decode an encoded data stream of the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyze the data stream to obtain a partition pattern of the first block of sub-images, and decode the first block of sub-images based on the obtained partition pattern of the first block of sub-images; and
determine (146) whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, decode an encoded data stream of the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyze the data stream to obtain a partition pattern of the second block of sub-images, and decode the second block of sub-images based on the obtained partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
2. Decoding method according to claim 1, characterized by the fact that the partition pattern of the first block of sub-images is of a first set of partition patterns, and the partition pattern of the second block of sub-images is of a second set of partition patterns, in which the first set of partition patterns comprises at least one partition pattern different from all partition patterns in the second set of partition patterns.
3. Decoding method according to claim 2, characterized by the fact that the second set of partition patterns is a subset of the first set of partition patterns.
4. Decoding method according to any one of claims 1 to 3, characterized in that a first set of partition patterns for the first block of sub-images with the size of 2NxN comprises a horizontal partition pattern and a partition pattern vertical, and the second set of partition patterns comprises the horizontal partition pattern; and a first set of partition patterns for the first block of subimages with the size of Nx2N comprises a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns comprises the vertical partition pattern.
5. Decoding method according to any one of claims 1 to 4, characterized by the fact that the method further comprises: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is not allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a non-restriction sub-image processing mode, wherein the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and the non-restriction sub-image processing mode comprises:
determine whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, decode an encoded data stream of the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyze the data stream to obtain a partition pattern of the first block of sub-images, and decode the first block of sub-images based on the obtained partition pattern of the first block of sub-images, where the partition pattern of the first block of sub-images is of a first set of partition patterns; and
determine whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, decode an encoded data stream of the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyze the data stream to obtain a partition pattern of the second block of sub-images, and decode the second block of sub-images based on the obtained partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is of a second set of partition patterns, and all the partition patterns in the first set of partition patterns are the same as all the partition patterns in the second set of partition patterns.
6. Decoding method according to any one of claims 1 to 5, characterized in that the restriction of the partition pattern of the second block of sub-images by the partition pattern of the first block of sub-images specifically comprises:
when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is vertical partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is horizontal partition or vertical partition.
7. Decoding method according to any one of claims 1 to 6, characterized by the fact that the 2Nx2N image block is located within an I slice.
8. Decoding method, characterized by the fact that it comprises:
analyze (162) a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N; and the constraint sub-image processing mode comprises:
analyze (164) the data stream to determine a partition identifier of the first block of sub-images, determine a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decode the first block of sub-images based on the partition pattern of the first block of sub-images; and
analyze (166) the data stream to determine a partition identifier of the second block of sub-images, determine a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
9. Decoding method according to claim 8, characterized by the fact that the partition pattern of the first block of sub-images is of a first set of partition patterns, and the partition pattern of the second block of sub-images is of a second set of partition patterns, in which the first set of partition patterns comprises at least one partition pattern different from all partition patterns in the second set of partition patterns.
10. Decoding method according to claim 9, characterized by the fact that the second set of partition patterns is a subset of the first set of partition patterns.
11. Decoding method according to any one of claims 8 to 10, characterized by the fact that for the first block of 2NxN sub-images and the second block of 2NxN sub-images, the first set of partition patterns comprises no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns comprises no partition and the horizontal partition pattern; and for the first block of Nx2N sub-images and the second block of Nx2N sub-images, the first set of partition patterns comprises no partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns comprises no partition and the vertical partition pattern.
12. Decoding method according to any one of claims 8 to 11, characterized by the fact that the partition pattern of the first block of sub-images is different from the partition pattern of the second block of sub-images, and the partition pattern is a directional partition pattern.
13. Decoding method according to any one of claims 8 to 12, characterized in that the restriction of the partition pattern of the second block of sub-images by the partition pattern of the first block of sub-images comprises:
when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is no partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is no partition, vertical partition, or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is no partition or vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is no partition, horizontal partition, or vertical partition.
14. Decoding method according to any one of claims 8 to 13, characterized in that the method further comprises:
when partitioning the 2Nx2N image block using a quadtree partition pattern is not allowed, processing the encoded data streams of the first block of sub-images and the second block of sub-images in a non-restriction sub-image processing mode, wherein the non-restriction sub-image processing mode comprises:
analyze the data stream to determine a partition identifier of the first block of sub-images, determine a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decode the first block of sub-images based on the partition pattern of the first block of sub-images; and analyze the data stream to determine a partition identifier of the second block of sub-images, determine a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decode the second block of sub-images based on the partition pattern of the second block of sub-images, in which the partition pattern of the first block of sub-images and the partition pattern of the second block of sub-images are selected from the same set of partition patterns.
15. Decoding method according to any one of claims 8 to 14, characterized by the fact that the 2Nx2N image block is located within an I slice.
16. Coding method, characterized by the fact that it comprises:
when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and the constraint sub-image processing mode comprises:
determine whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, encode the first block of sub-images to generate an encoded data stream; or when the first block of sub-images needs to be further partitioned, determine a partition pattern of the first block of sub-images, partition the first block of sub-images based on the partition pattern of the first block of sub-images, and encode the partition pattern of the first block of sub-images and the first partitioned block of sub-images; and
determine whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, encode the second block of sub-images to generate an encoded data stream; or when the second block of sub-images needs to be further partitioned, determine a partition pattern of the second block of sub-images, partition the second block of sub-images based on the partition pattern of the second block of sub-images, and encode the partition pattern of the second block of sub-images and the second partitioned block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
17. Coding method, characterized by the fact that it comprises:
when (132) partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, processing a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and the constraint sub-image processing mode comprises:
determining (134) a partition pattern of the first block of sub-images, encoding the partition pattern of the first block of sub-images, and encoding the first block of sub-images based on the partition pattern of the first block of sub-images; and
determining (136) a partition pattern of the second block of sub-images, encoding the partition pattern of the second block of sub-images, and encoding the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
18. Decoding apparatus, characterized by the fact that it comprises:
a restriction decoding determination module (182), configured to: analyze a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, wherein the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and a restriction decoding module (184) that is configured to implement the constraint sub-image processing mode, and which comprises:
a first sub-image processing module (1842), configured to: determine whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, decode an encoded data stream of the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyze the data stream to obtain a partition pattern of the first block of sub-images, and decode the first block of sub-images based on the obtained partition pattern of the first block of sub-images; and
a second sub-image processing module (1844), configured to: determine whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, decode an encoded data stream of the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyze the data stream to obtain a partition pattern of the second block of sub-images, and decode the second block of sub-images based on the obtained partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
19. Decoding apparatus according to claim 18, characterized in that the partition pattern of the first block of subimages is of a first set of partition patterns, and the partition pattern of the second block of subimages is of a second set of partition patterns, where the first set of partition patterns comprises at least one partition pattern different from all partition patterns in the second set of partition patterns.
20. Decoding apparatus according to claim 19, characterized in that the second set of partition patterns is a subset of the first set of partition patterns.
21. Decoding apparatus according to any one of claims 18 to 20, characterized by the fact that a first set of partition patterns for the first block of sub-images with the size of 2NxN comprises a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns comprises the horizontal partition pattern; and a first set of partition patterns for the first block of sub-images with the size of Nx2N comprises a horizontal partition pattern and a vertical partition pattern, and the second set of partition patterns comprises the vertical partition pattern.
22. Decoding apparatus according to any one of claims 18 to 21, characterized by the fact that the restriction decoding determination module is further configured to: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is not allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a non-restriction sub-image processing mode, where the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the image block with the size of 2Nx2N; and the decoding apparatus further comprises a non-restriction decoding module (186) which is configured to implement the non-restriction sub-image processing mode and which comprises:
a third sub-image processing module (1862), configured to: determine whether the first block of sub-images needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, decode an encoded data stream of the first block of sub-images; or when the first block of sub-images needs to be further partitioned, analyze the data stream to obtain a partition pattern of the first block of sub-images, and decode the first block of sub-images based on the obtained partition pattern of the first block of sub-images, where the partition pattern of the first block of sub-images is of a first set of partition patterns; and
a fourth sub-image processing module (1864), configured to: determine whether the second block of sub-images needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, decode an encoded data stream of the second block of sub-images; or when the second block of sub-images needs to be further partitioned, analyze the data stream to obtain a partition pattern of the second block of sub-images, and decode the second block of sub-images based on the obtained partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is of a second set of partition patterns, and all the partition patterns in the first set of partition patterns are the same as all the partition patterns in the second set of partition patterns.
23. Decoding apparatus according to any one of claims 18 to 22, characterized in that the restriction of the partition pattern of the second block of sub-images by the partition pattern of the first block of sub-images specifically comprises:
when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is vertical partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition, the partition pattern of the second block of sub-images is horizontal partition or vertical partition.
24. Decoding apparatus according to any one of claims 18 to 23, characterized by the fact that the 2Nx2N image block is located within an I slice.
25. Decoding apparatus, characterized by the fact that it comprises:
a restriction decoding determination module (212), configured to: analyze a data stream, and when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, wherein the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and a restriction decoding module (214) that is configured to implement the constraint sub-image processing mode and which comprises:
a first sub-image processing module (2142), configured to: analyze the data stream to determine a partition identifier of the first block of sub-images, determine a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decode the first block of sub-images based on the partition pattern of the first block of sub-images; and
a second sub-image processing module (2144), configured to: analyze the data stream to determine a partition identifier of the second block of sub-images, determine a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned block of sub-images and the first partitioned block of sub-images is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
[26]
26. Decoding apparatus according to claim 25, characterized in that the partition pattern of the first block of sub-images is a partition pattern from a first set of partition patterns, and the partition pattern of the second block of sub-images is a partition pattern from a second set of partition patterns, where the first set of partition patterns comprises at least one partition pattern different from all partition patterns in the second set of partition patterns.
[27]
27. Decoding apparatus according to claim 26, characterized in that the second set of partition patterns is a subset of the first set of partition patterns.
[28]
28. Decoding apparatus according to any one of claims 25 to 27, characterized by the fact that for the first block of sub-images 2NxN and the second block of sub-images 2NxN, the first set of partition patterns comprises non-partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns comprises non-partition and the horizontal partition pattern; and for the first block of sub-images Nx2N and the second block of sub-images Nx2N, the first set of partition patterns comprises non-partition, a horizontal partition pattern, and a vertical partition pattern, and the second set of partition patterns comprises non-partition and the vertical partition pattern.
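Under the reading of claim 28 adopted above (the first set contains non-partition, horizontal, and vertical; the second set drops the direction that would recreate the quadtree), the two candidate sets and the subset relation of claims 26 and 27 can be sketched as follows; all names are illustrative and nothing here is taken from the patent text itself.

#include <algorithm>
#include <cassert>
#include <initializer_list>
#include <vector>

enum class Pattern { None, Horizontal, Vertical };
enum class Shape { Size2NxN, SizeNx2N };

std::vector<Pattern> firstSet(Shape) {
    return {Pattern::None, Pattern::Horizontal, Pattern::Vertical};   // all three options
}

std::vector<Pattern> secondSet(Shape shape) {
    // For 2NxN pairs the vertical pattern is dropped; for Nx2N pairs the horizontal one.
    return shape == Shape::Size2NxN
        ? std::vector<Pattern>{Pattern::None, Pattern::Horizontal}
        : std::vector<Pattern>{Pattern::None, Pattern::Vertical};
}

int main() {
    for (Shape s : {Shape::Size2NxN, Shape::SizeNx2N}) {
        auto a = firstSet(s), b = secondSet(s);
        // Claim 27: the second set is a subset of the first set.
        for (Pattern p : b)
            assert(std::find(a.begin(), a.end(), p) != a.end());
        // Claim 26: the first set has at least one pattern missing from the second set.
        assert(a.size() > b.size());
    }
    return 0;
}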
[29]
29. Decoding apparatus according to any one of claims 25 to 28, characterized in that the partition pattern of the first block of sub-images is different from the partition pattern of the second block of sub-images, and each partition pattern is a directional partition pattern.
[30]
30. Decoding apparatus according to any one of claims 25 to 29, characterized in that the partition pattern of the second block of sub-images is restricted by the partition pattern of the first block of sub-images which comprises:
when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a vertical partition pattern, the partition pattern of the second block of sub-images is without partition or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of 2NxN and the partition pattern of the first block of sub-images is a non-vertical partition pattern, the partition pattern of the second block of sub-images is without partition, vertical partition, or horizontal partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is horizontal partition, the partition pattern of the second block of sub-images is without partition or vertical partition; or when the first block of sub-images and the second block of sub-images have a size of Nx2N and the partition pattern of the first block of sub-images is a non-horizontal partition pattern, the partition pattern of the second block of sub-images is without partition, horizontal partition, or vertical partition.
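The case analysis of claim 30 can equivalently be written as a single predicate over the pair of patterns. The sketch below is illustrative only; the combinations it rejects (vertical then vertical for 2NxN pairs, horizontal then horizontal for Nx2N pairs) are exactly the ones that would reproduce the quadtree partition of the 2Nx2N block.

#include <iostream>

enum class Pattern { None, Horizontal, Vertical };
enum class Shape { Size2NxN, SizeNx2N };

bool secondPatternAllowed(Shape shape, Pattern first, Pattern second) {
    if (shape == Shape::Size2NxN) {
        // First block split vertically: the second may be without partition or horizontal only.
        if (first == Pattern::Vertical) return second != Pattern::Vertical;
        return true;   // first not vertical: without partition, vertical, or horizontal
    }
    // Nx2N pairs: the roles of horizontal and vertical are swapped.
    if (first == Pattern::Horizontal) return second != Pattern::Horizontal;
    return true;
}

int main() {
    std::cout << std::boolalpha
              << secondPatternAllowed(Shape::Size2NxN, Pattern::Vertical, Pattern::Vertical) << '\n';
    return 0;
}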
[31]
31. Decoding apparatus according to any one of claims 28 to 30, characterized by the fact that the restriction decoding determination module is further configured to: when partitioning the 2Nx2N image block using the quadtree partition pattern is not allowed, process encoded data streams of the first sub-image block and the second sub-image block in a non-constraint sub-image processing mode; and correspondingly, the decoding apparatus further comprises:
a non-restriction decoding module (216) which is configured to implement the non-constraint sub-image processing mode and which comprises:
a third sub-image processing module (2162), configured to: analyze the data stream to determine a partition identifier of the first block of sub-images, determine a partition pattern of the first block of sub-images based on the partition identifier of the first block of sub-images, and decode the first block of sub-images based on the partition pattern of the first block of sub-images; and a fourth sub-image processing module (2164), configured to: analyze the data stream to determine a partition identifier of the second block of sub-images, determine a partition pattern of the second block of sub-images based on the partition identifier of the second block of sub-images, and decode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the first block of sub-images and the partition pattern of the second block of sub-images are selected from the same set of partition patterns.
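A sketch of the mode selection in claim 31, assuming an illustrative helper that returns the candidate pattern list for the second sub-image block: when the quadtree split of the 2Nx2N block is not allowed, both blocks draw from the same (unrestricted) set; otherwise the second block's set is restricted as in the constraint mode.

#include <vector>

enum class Pattern { None, Horizontal, Vertical };

std::vector<Pattern> candidatesForSecondBlock(bool quadtreeAllowed, Pattern firstPattern) {
    std::vector<Pattern> all = {Pattern::None, Pattern::Horizontal, Pattern::Vertical};
    if (!quadtreeAllowed)
        return all;                       // non-constraint mode: same set as the first block
    // Constraint mode for a 2NxN pair: drop vertical when the first block was split vertically.
    if (firstPattern == Pattern::Vertical)
        return {Pattern::None, Pattern::Horizontal};
    return all;
}

int main() {
    // With the quadtree split disallowed, the second block keeps all three candidates.
    return candidatesForSecondBlock(false, Pattern::Vertical).size() == 3 ? 0 : 1;
}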
[32]
32. Decoding apparatus according to any one of claims 25 to 31, characterized by the fact that the 2Nx2N image block is located within an I slice.
[33]
33. Encoding apparatus, characterized by the fact that it comprises:
a restriction encoding determination module (172), configured to: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, wherein the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and a restriction encoding module (174) which is configured to implement the constraint sub-image processing mode and which comprises:
a first sub-image processing module (1742), configured to: determine whether the first sub-image block needs to be further partitioned; and when the first block of sub-images does not need to be further partitioned, encode the first block of sub-images to generate an encoded data stream; or when the first block of sub-images needs to be further partitioned, determine a partition pattern of the first block of sub-images, partition the first block of sub-images based on the partition pattern of the first block of sub-images, and encode the partition pattern of the first block of sub-images and the first partitioned sub-image block; and a second sub-image processing module (1744), configured to: determine whether the second sub-image block needs to be further partitioned; and when the second block of sub-images does not need to be further partitioned, encode the second block of sub-images to generate an encoded data stream; or when the second block of sub-images needs to be further partitioned, determine a partition pattern of the second block of sub-images, partition the second block of sub-images based on the partition pattern of the second block of sub-images, and encode the partition pattern of the second block of sub-images and the second partitioned sub-image block, in which the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
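On the encoder side, the control flow of claim 33 can be sketched as below, with a placeholder mode decision and a hypothetical BitWriter standing in for the real rate-distortion search and entropy coder; only the restriction applied to the second sub-image block's candidates follows the claim, and all names are illustrative.

#include <vector>

enum class Pattern { None, Horizontal, Vertical };

struct BitWriter { std::vector<int> out; void write(int v) { out.push_back(v); } };

// Placeholder for the encoder's mode decision over a restricted candidate list.
Pattern choosePattern(const std::vector<Pattern>& candidates) { return candidates.front(); }

void encodeSubBlockPair2NxN(BitWriter& bw) {
    // First sub-image block: full candidate set.
    Pattern first = choosePattern({Pattern::None, Pattern::Vertical, Pattern::Horizontal});
    bw.write(static_cast<int>(first));                 // signal the first block's pattern

    // Second sub-image block: vertical is removed when the first block was split vertically,
    // so the pair can never reproduce the quadtree partition of the 2Nx2N block.
    std::vector<Pattern> secondCandidates = {Pattern::None, Pattern::Horizontal};
    if (first != Pattern::Vertical) secondCandidates.push_back(Pattern::Vertical);
    Pattern second = choosePattern(secondCandidates);
    bw.write(static_cast<int>(second));                // signal the second block's pattern
}

int main() { BitWriter bw; encodeSubBlockPair2NxN(bw); return bw.out.size() == 2 ? 0 : 1; }

Claim 34 follows the same structure but always determines and signals a partition pattern for each sub-image block, so the "no further partitioning" branch of claim 33 corresponds to the None pattern here.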
[34]
34. Encoding apparatus, characterized by the fact that it comprises:
a restriction encoding determination module (192), configured to: when partitioning a block of images with a size of 2Nx2N using a quadtree partition pattern is allowed, process a first block of 2NxN sub-images and a second block of 2NxN sub-images or a first block of Nx2N sub-images and a second block of Nx2N sub-images in a constraint sub-image processing mode, wherein the first block of 2NxN sub-images and the second block of 2NxN sub-images or the first block of Nx2N sub-images and the second block of Nx2N sub-images are obtained by partitioning the block of images with the size of 2Nx2N; and a restriction encoding module (194) which is configured to implement the constraint sub-image processing mode and which comprises:
a first sub-image processing module (1942), configured to: determine a partition pattern of the first block of sub-images, encode the partition pattern of the first block of sub-images, and encode the first block of sub-images based on the partition pattern of the first block of sub-images; and a second sub-image processing module (1944), configured to: determine a partition pattern of the second block of sub-images, encode the partition pattern of the second block of sub-images, and encode the second block of sub-images based on the partition pattern of the second block of sub-images, where the partition pattern of the second block of sub-images is constrained by the partition pattern of the first block of sub-images, so that an image block partition pattern obtained for the second partitioned sub-image block and the first partitioned sub-image block is different from an image block partition pattern obtained after the 2Nx2N image block is partitioned using the quadtree partition pattern.
Similar technologies:
Publication No. | Publication date | Patent title
BR112018077218A2|2020-01-28|encoding method and apparatus and decoding method and apparatus
RU2693307C1|2019-07-02|Video encoding method and device and video decoding method and device, which jointly use sao parameters between color components
ES2702950T3|2019-03-06|Inverse color-space transformation for video encoded with lossless lossless
EP3011738A1|2016-04-27|Adaptive color transforms for video coding
WO2016057782A1|2016-04-14|Boundary filtering and cross-component prediction in video coding
US20190327483A1|2019-10-24|Picture prediction method and related device
KR20190020083A|2019-02-27|Encoding method and apparatus and decoding method and apparatus
KR20160076309A|2016-06-30|Method and Apparatus for Encoding and Method and Apparatus for Decoding
WO2017129023A1|2017-08-03|Decoding method, encoding method, decoding apparatus, and encoding apparatus
CN112690000B|2022-02-18|Apparatus and method for inverse quantization
CN109565588B|2020-09-08|Chroma prediction method and device
CN112954367A|2021-06-11|Encoder, decoder and corresponding methods using palette coding
CN113411613A|2021-09-17|Encoder, decoder and corresponding methods for enabling high level flags using DCT2
CN113455005A|2021-09-28|Deblocking filter for sub-partition boundaries generated by intra sub-partition coding tools
BR112021009848A2|2021-08-17|encoder, decoder and corresponding methods for inter-prediction.
KR20220035154A|2022-03-21|Image encoding/decoding method, apparatus and method of transmitting bitstream for signaling chroma component prediction information according to whether or not the palette mode is applied
KR20210008080A|2021-01-20|Encoder, decoder and corresponding method used for the conversion process
KR20220024117A|2022-03-03|Signaling of Chroma Quantization Parameter | Mapping Tables
TW202141979A|2021-11-01|Methods for quantization parameter control for video coding with joined pixel/transform based quantization
KR101757464B1|2017-07-27|Method and Apparatus for Encoding and Method and Apparatus for Decoding
KR20220012355A|2022-02-03|Encoders, decoders and corresponding methods of chroma quantization control
WO2019219066A1|2019-11-21|Coding and decoding methods and devices
WO2020182052A1|2020-09-17|An encoder, a decoder and corresponding methods restricting size of sub-partitions from intra sub-partition coding mode tool
BR112021013565A2|2021-09-21|ENCODER, DECODER, NON-TRANSENTIAL COMPUTER-READABLE MEDIUM AND ONE-BLOCK VIDEO ENCODERING METHOD OF AN IMAGE
BR112020026183A2|2021-09-08|VIDEO ENCODING METHOD, ENCODER, DECODER AND COMPUTER PROGRAM PRODUCT
Family patents:
Publication No. | Publication date
EP3468190A1|2019-04-10|
US20210044839A1|2021-02-11|
US11245932B2|2022-02-08|
CN107566848A|2018-01-09|
US10812835B2|2020-10-20|
CN107566848B|2020-04-14|
EP3468190A4|2019-05-01|
US20190273950A1|2019-09-05|
WO2018001207A1|2018-01-04|
KR20190019176A|2019-02-26|
Cited documents:
Publication No. | Filing date | Publication date | Applicant | Patent title

US9049452B2|2011-01-25|2015-06-02|Mediatek Singapore Pte. Ltd.|Method and apparatus for compressing coding unit in high efficiency video coding|
US9210442B2|2011-01-12|2015-12-08|Google Technology Holdings LLC|Efficient transform unit representation|
CN102611885B|2011-01-20|2014-04-30|Huawei Technologies Co., Ltd.|Encoding and decoding method and device|
JP5810700B2|2011-07-19|2015-11-11|Sony Corporation|Image processing apparatus and image processing method|
CN102761742B|2012-07-03|2017-06-06|Huawei Technologies Co., Ltd.|Transform block division methods, transform block divides coding method and the coding/decoding method of parameter|
CN103747272B|2014-01-09|2017-03-01|Xidian University|Fast transform approach for the remaining quaternary tree coding of HEVC|
WO2019219066A1|2018-05-16|2019-11-21|Huawei Technologies Co., Ltd.|Coding and decoding methods and devices|
CN110505482B|2018-05-16|2021-10-26|Huawei Technologies Co., Ltd.|Encoding and decoding method and device|
JP2020030725A|2018-08-24|2020-02-27|Hitachi, Ltd.|Equipment analysis support device, equipment analysis support method, and equipment analysis system|
EP3837845A4|2018-09-03|2021-08-04|Huawei Technologies Co., Ltd.|A video encoder, a video decoder and corresponding methods|
BR112021003999A2|2018-09-03|2021-05-25|Huawei Technologies Co., Ltd.|relationship between partition constraint elements|
WO2020048361A1|2018-09-05|2020-03-12|Huawei Technologies Co., Ltd.|Video decoding method and video decoder|
CN111327894A|2018-12-15|2020-06-23|Huawei Technologies Co., Ltd.|Block division method, video encoding and decoding method and video encoder and decoder|
WO2020119742A1|2018-12-15|2020-06-18|Huawei Technologies Co., Ltd.|Block division method, video encoding and decoding method, and video codec|
CN111327899A|2018-12-16|2020-06-23|Huawei Technologies Co., Ltd.|Video decoder and corresponding method|
WO2020135409A1|2018-12-24|2020-07-02|Huawei Technologies Co., Ltd.|Video decoding method and apparatus, and decoding device|
CN111355951A|2018-12-24|2020-06-30|Huawei Technologies Co., Ltd.|Video decoding method, device and decoding equipment|
WO2021027774A1|2019-08-10|2021-02-18|Beijing Bytedance Network Technology Co., Ltd.|Subpicture dependent signaling in video bitstreams|
US11146824B2|2019-12-30|2021-10-12|Mediatek Inc.|Video encoding or decoding methods and apparatuses related to high-level information signaling|
US20210218966A1|2020-01-10|2021-07-15|Mediatek Inc.|Signaling Quantization Related Parameters|
Legal status:
2021-10-13| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application No. | Filing date | Patent title
CN201610512291.1A|CN107566848B|2016-06-30|2016-06-30|Method and device for coding and decoding|
PCT/CN2017/090063|WO2018001207A1|2016-06-30|2017-06-26|Coding and decoding method and apparatus|